go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
There were additional failures detected after the initial failure:
[FAILED] Nov 25 10:09:44.506: failed to list events in namespace "chunking-9767": Get "https://35.230.98.143/api/v1/namespaces/chunking-9767/events": dial tcp 35.230.98.143:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 10:09:44.546: Couldn't delete ns: "chunking-9767": Delete "https://35.230.98.143/api/v1/namespaces/chunking-9767": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/chunking-9767", Err:(*net.OpError)(0xc004ccc500)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 10:05:41.896
Nov 25 10:05:41.896: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/25/22 10:05:41.898
STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:05:42.024
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:05:42.105
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/apimachinery/chunking.go:51
STEP: creating a large number of resources 11/25/22 10:05:46.565
[It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
  test/e2e/apimachinery/chunking.go:126
STEP: retrieving the first page 11/25/22 10:06:04.24
Nov 25 10:06:04.346: INFO: Retrieved 40/40 results with rv 8256 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0
STEP: retrieving the second page until the token expires 11/25/22 10:06:04.346
Nov 25 10:06:24.476: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:06:44.448: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:07:04.552: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:07:24.424: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:07:44.435: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:08:04.395: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:08:24.396: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:08:44.394: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:09:04.395: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 25 10:09:24.394: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODI1Niwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
STEP: retrieving the second page again with the token received with the error message 11/25/22 10:09:44.386
Nov 25 10:09:44.425: INFO: Unexpected error: failed to list pod templates in namespace: chunking-9767, given inconsistent continue token and limit: 40:
    <*url.Error | 0xc0051a6300>: {
        Op: "Get",
        URL: "https://35.230.98.143/api/v1/namespaces/chunking-9767/podtemplates?limit=40",
        Err: <*net.OpError | 0xc0050f2230>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0064ac3c0>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0022be1e0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 10:09:44.425: FAIL: failed to list pod templates in namespace: chunking-9767, given inconsistent continue token and limit: 40: Get "https://35.230.98.143/api/v1/namespaces/chunking-9767/podtemplates?limit=40": dial tcp 35.230.98.143:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 25 10:09:44.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 10:09:44.466
STEP: Collecting events from namespace "chunking-9767". 11/25/22 10:09:44.466
Nov 25 10:09:44.506: INFO: Unexpected error: failed to list events in namespace "chunking-9767":
    <*url.Error | 0xc0051a6810>: {
        Op: "Get",
        URL: "https://35.230.98.143/api/v1/namespaces/chunking-9767/events",
        Err: <*net.OpError | 0xc0050f2460>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc005b36510>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0022be740>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 10:09:44.506: FAIL: failed to list events in namespace "chunking-9767": Get "https://35.230.98.143/api/v1/namespaces/chunking-9767/events": dial tcp 35.230.98.143:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0009da5c0, {0xc00219f6d0, 0xd})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002b94d00}, {0xc00219f6d0, 0xd})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0009da650?, {0xc00219f6d0?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f92780)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc0014df1b0?, 0xc004cd9fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00258a088?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0014df1b0?, 0x29449fc?}, {0xae73300?, 0xc004cd9f80?, 0x2a6d786?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
STEP: Destroying namespace "chunking-9767" for this suite. 11/25/22 10:09:44.507
Nov 25 10:09:44.546: FAIL: Couldn't delete ns: "chunking-9767": Delete "https://35.230.98.143/api/v1/namespaces/chunking-9767": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/chunking-9767", Err:(*net.OpError)(0xc004ccc500)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f92780)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0014df0f0?, 0xc0048a2fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0014df0f0?, 0x0?}, {0xae73300?, 0x5?, 0xc0066423c0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376
There were additional failures detected after the initial failure:
[FAILED] Nov 25 10:09:43.155: failed to list events in namespace "cronjob-2167": Get "https://35.230.98.143/api/v1/namespaces/cronjob-2167/events": dial tcp 35.230.98.143:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 10:09:43.195: Couldn't delete ns: "cronjob-2167": Delete "https://35.230.98.143/api/v1/namespaces/cronjob-2167": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/cronjob-2167", Err:(*net.OpError)(0xc0046ad720)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 10:06:24.399
Nov 25 10:06:24.399: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/25/22 10:06:24.401
STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:06:24.66
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:06:24.813
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/apps/cronjob.go:96
STEP: Creating a suspended cronjob 11/25/22 10:06:24.895
STEP: Ensuring no jobs are scheduled 11/25/22 10:06:24.998
STEP: Ensuring no job exists by listing jobs explicitly 11/25/22 10:09:43.037
Nov 25 10:09:43.076: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-2167:
    <*url.Error | 0xc0046b1dd0>: {
        Op: "Get",
        URL: "https://35.230.98.143/apis/batch/v1/namespaces/cronjob-2167/jobs",
        Err: <*net.OpError | 0xc005314a50>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00539fa10>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0046cd720>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 10:09:43.077: FAIL: Failed to list the CronJobs in namespace cronjob-2167: Get "https://35.230.98.143/apis/batch/v1/namespaces/cronjob-2167/jobs": dial tcp 35.230.98.143:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 25 10:09:43.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 10:09:43.116
STEP: Collecting events from namespace "cronjob-2167". 11/25/22 10:09:43.116
Nov 25 10:09:43.155: INFO: Unexpected error: failed to list events in namespace "cronjob-2167":
    <*url.Error | 0xc004896300>: {
        Op: "Get",
        URL: "https://35.230.98.143/api/v1/namespaces/cronjob-2167/events",
        Err: <*net.OpError | 0xc005314cd0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc002fa6090>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0046cdaa0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 10:09:43.155: FAIL: failed to list events in namespace "cronjob-2167": Get "https://35.230.98.143/api/v1/namespaces/cronjob-2167/events": dial tcp 35.230.98.143:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0011b85c0, {0xc00335efb0, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001088820}, {0xc00335efb0, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0011b8650?, {0xc00335efb0?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000531860)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc00142b770?, 0xc0042f8fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0010e68a8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc00142b770?, 0x29449fc?}, {0xae73300?, 0xc0042f8f80?, 0x2a6d786?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-2167" for this suite. 11/25/22 10:09:43.156
Nov 25 10:09:43.195: FAIL: Couldn't delete ns: "cronjob-2167": Delete "https://35.230.98.143/api/v1/namespaces/cronjob-2167": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/cronjob-2167", Err:(*net.OpError)(0xc0046ad720)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000531860)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc00142b6b0?, 0xc003a08fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc00142b6b0?, 0x0?}, {0xae73300?, 0x5?, 0xc0035414e8?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000943860)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 10:09:44.751
Nov 25 10:09:44.751: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/25/22 10:09:44.753
Nov 25 10:09:44.792: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:46.831: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:48.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:50.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:52.833: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:54.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:56.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:58.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:00.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:02.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:04.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:06.834: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:08.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:10.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:12.832: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.833: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.872: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.872: INFO: Unexpected error:
    <*errors.errorString | 0xc000295d70>: {
        s: "timed out waiting for the condition",
    }
Nov 25 10:10:14.872: FAIL: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000943860)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 25 10:10:14.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 10:10:14.912
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0007921e0)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 10:09:43.921
Nov 25 10:09:43.921: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset 11/25/22 10:09:43.924
Nov 25 10:09:43.963: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:46.003: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:48.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:50.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:52.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:54.003: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:56.005: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:09:58.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:00.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:02.003: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:04.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:06.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:08.006: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:10.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:12.003: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.004: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.043: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:10:14.043: INFO: Unexpected error:
    <*errors.errorString | 0xc0000d1d10>: {
        s: "timed out waiting for the condition",
    }
Nov 25 10:10:14.043: FAIL: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0007921e0)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 25 10:10:14.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 10:10:14.083
[DeferCleanup (Each)] [sig-apps] StatefulSet
  tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00468a340}, 0xc003478000)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
	test/e2e/apps/statefulset.go:643 +0x6d0
There were additional failures detected after the initial failure:
[FAILED] Nov 25 10:27:52.545: Get "https://35.230.98.143/apis/apps/v1/namespaces/statefulset-4810/statefulsets": dial tcp 35.230.98.143:443: connect: connection refused
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76
----------
[FAILED] Nov 25 10:27:52.625: failed to list events in namespace "statefulset-4810": Get "https://35.230.98.143/api/v1/namespaces/statefulset-4810/events": dial tcp 35.230.98.143:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 10:27:52.665: Couldn't delete ns: "statefulset-4810": Delete "https://35.230.98.143/api/v1/namespaces/statefulset-4810": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/statefulset-4810", Err:(*net.OpError)(0xc0025eaaa0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:14:52.348 Nov 25 10:14:52.348: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 10:14:52.35 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:16:05.31 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:16:05.4 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-4810 11/25/22 10:16:06.94 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/25/22 10:16:06.991 STEP: Creating stateful set ss in namespace statefulset-4810 11/25/22 10:16:07.046 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4810 11/25/22 10:16:07.091 Nov 25 10:16:07.135: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:16:17.203: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:16:27.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:16:37.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:16:47.178: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:16:57.183: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:07.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:17.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:27.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:37.240: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:47.200: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 10:17:57.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:07.255: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:17.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:27.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:37.193: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:47.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:18:57.203: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:19:07.202: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 10:19:17.202: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 11/25/22 10:19:17.202 Nov 25 10:19:17.257: INFO: 
Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 10:19:17.638: INFO: rc: 1 Nov 25 10:19:17.638: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 10:19:27.639: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 10:19:28.045: INFO: rc: 1 Nov 25 10:19:28.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 10:19:38.046: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 10:19:38.949: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 10:19:38.949: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 10:19:38.949: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 10:19:39.108: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 25 10:19:49.176: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 10:19:49.176: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 10:19:49.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999453s Nov 25 10:19:50.390: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.957330551s Nov 25 10:19:51.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.915484606s Nov 25 10:19:52.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.873346513s Nov 25 10:19:53.518: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.830572963s Nov 25 10:19:54.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.786952895s Nov 25 10:19:55.605: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.743728535s Nov 25 10:19:56.648: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.699495033s Nov 25 10:19:57.691: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.656608808s Nov 25 10:19:58.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 613.998095ms STEP: Scaling up stateful set ss to 3 replicas and waiting 
until all of them will be running in namespace statefulset-4810 11/25/22 10:19:59.752 Nov 25 10:19:59.801: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:20:00.137: INFO: rc: 1 Nov 25 10:20:00.137: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:20:10.137: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:20:10.482: INFO: rc: 1 Nov 25 10:20:10.482: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:20:20.482: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:20:20.829: INFO: rc: 1 Nov 25 10:20:20.829: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:20:30.830: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:20:31.194: INFO: rc: 1 Nov 25 10:20:31.194: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:20:41.194: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Nov 25 10:20:41.531: INFO: rc: 1 Nov 25 10:20:41.531: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:20:51.532: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:20:51.870: INFO: rc: 1 Nov 25 10:20:51.870: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 10:21:01.870: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4810 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 10:21:02.383: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 25 10:21:02.383: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 10:21:02.383: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 10:21:02.425: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m14.644s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m0s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 1m7.24s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:21:12.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:21:22.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m34.645s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m20.002s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 1m27.241s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:21:32.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:21:42.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m54.648s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m40.005s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 1m47.244s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:21:52.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:22:02.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m14.65s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m0.006s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 2m7.246s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:22:12.468: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:22:22.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m34.651s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m20.008s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 2m27.247s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:22:32.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:22:42.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m54.653s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m40.01s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 2m47.249s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:22:52.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:23:02.468: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m14.656s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m0.013s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 3m7.253s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:23:12.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:23:22.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m34.658s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m20.015s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 3m27.254s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:23:32.484: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:23:42.478: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m54.66s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m40.017s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 3m47.256s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:23:52.468: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:24:02.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m14.663s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m0.019s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 4m7.259s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:24:12.468: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:24:22.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m34.665s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m20.021s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 4m27.261s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:24:32.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:24:42.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m54.666s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m40.023s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 4m47.262s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:24:52.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:25:02.468: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 10m14.668s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m0.025s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 5m7.264s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:25:12.467: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:25:22.467: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 10m34.67s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m20.027s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 5m27.266s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:25:32.508: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:25:42.479: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 10m54.672s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m40.029s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 5m47.268s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:25:52.484: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:26:02.479: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 11m14.674s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 10m0.031s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 6m7.27s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 2746 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc00085b020}, {0x7fbcaa0, 0xc0040feec0}, {0xc002e8bf38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc00085b020}, {0xc004259187?, 0x75b5158?}, {0x7facee0?, 0xc0010818a8?}, {0xc002e8bf38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 ------------------------------ Nov 25 10:26:12.527: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:26:22.493: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 11m34.677s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 10m20.034s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 6m27.273s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:26:32.497: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:26:43.078: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 11m54.678s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 10m40.035s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 6m47.275s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:26:52.478: INFO: Found 1 stateful pods, waiting for 3 Nov 25 10:27:02.476: INFO: Found 1 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 12m14.68s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 11m0.037s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 7m7.277s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:27:12.475: INFO: Found 2 stateful pods, waiting for 3 Nov 25 10:27:22.525: INFO: Found 2 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 12m34.682s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 11m20.039s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 7m27.278s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:27:32.505: INFO: Found 2 stateful pods, waiting for 3 Nov 25 10:27:42.478: INFO: Found 2 stateful pods, waiting for 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 12m54.684s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 11m40.041s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4810 (Step Runtime: 7m47.28s) test/e2e/apps/statefulset.go:641 Spec Goroutine goroutine 2744 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:27:52.465: INFO: Unexpected error: <*url.Error | 0xc00357c000>: { Op: "Get", URL: "https://35.230.98.143/api/v1/namespaces/statefulset-4810/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc00312e0a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00329f230>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005cc000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 10:27:52.465: FAIL: Get "https://35.230.98.143/api/v1/namespaces/statefulset-4810/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 35.230.98.143:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00468a340}, 0xc003478000) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 E1125 10:27:52.465961 8227 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00468a340}, 0xc003478000)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()\n\ttest/e2e/apps/statefulset.go:643 +0x6d0", CustomMessage:""}} (�[1m�[38;5;9mYour Test Panicked�[0m �[38;5;243mtest/e2e/framework/statefulset/rest.go:69�[0m When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...). �[1mLearn more at:�[0m �[38;5;14m�[4mhttp://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure�[0m ) goroutine 2744 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc0007f09a0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0007f09a0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc0007f09a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc001838540, 0xb6}, {0xc001235540?, 0x75b521a?, 0xc001235560?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc003622160, 0xa1}, {0xc0012355d8?, 0xc003622160?, 0xc001235600?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc00357c000}, {0x0?, 0xc0020fd230?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00468a340}, 0xc003478000) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004490120, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002461de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00468a340?, 0xc002461e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00468a340}, 0x3, 0x3, 0xc003478000) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000972480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 25 10:27:52.505: INFO: Deleting all statefulset in ns statefulset-4810 Nov 25 10:27:52.545: INFO: Unexpected error: <*url.Error | 0xc00357c750>: { Op: "Get", URL: "https://35.230.98.143/apis/apps/v1/namespaces/statefulset-4810/statefulsets", Err: <*net.OpError | 0xc00312e370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00329f500>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005cc620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 10:27:52.545: FAIL: Get "https://35.230.98.143/apis/apps/v1/namespaces/statefulset-4810/statefulsets": dial tcp 35.230.98.143:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc00468a340}, {0xc0051fc050, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 10:27:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:27:52.585 STEP: Collecting events from namespace "statefulset-4810". 
11/25/22 10:27:52.585 Nov 25 10:27:52.625: INFO: Unexpected error: failed to list events in namespace "statefulset-4810": <*url.Error | 0xc00357cc30>: { Op: "Get", URL: "https://35.230.98.143/api/v1/namespaces/statefulset-4810/events", Err: <*net.OpError | 0xc00312e5a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00359fec0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 98, 143], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005ccbe0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 10:27:52.625: FAIL: failed to list events in namespace "statefulset-4810": Get "https://35.230.98.143/api/v1/namespaces/statefulset-4810/events": dial tcp 35.230.98.143:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0012345c0, {0xc0051fc050, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00468a340}, {0xc0051fc050, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001234650?, {0xc0051fc050?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002f8ff0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000f57360?, 0xc002432fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00468a088?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f57360?, 0x29449fc?}, {0xae73300?, 0xc002432f80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-4810" for this suite. 11/25/22 10:27:52.625 Nov 25 10:27:52.665: FAIL: Couldn't delete ns: "statefulset-4810": Delete "https://35.230.98.143/api/v1/namespaces/statefulset-4810": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/statefulset-4810", Err:(*net.OpError)(0xc0025eaaa0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0002f8ff0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000f572b0?, 0xc002431fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f572b0?, 0x0?}, {0xae73300?, 0x5?, 0xc004490378?}) /usr/local/go/src/reflect/value.go:368 +0xbc
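All of the stack traces above bottom out in the same polling loop: statefulset.WaitForRunning calls wait.PollImmediate, which lists the pods matching the StatefulSet's label selector (baz=blah,foo=bar) and counts how many are Running; the repeated "Found 1 stateful pods, waiting for 3" lines are that loop reporting. The terminal failure is the List call returning "connection refused" once the apiserver at 35.230.98.143 became unreachable, which ExpectNoError turns into a Ginkgo Fail (and hence the panic captured above). A minimal sketch of that polling pattern using client-go calls directly; the helper name, interval, and timeout are illustrative and not the framework's actual code:

// Sketch (not the e2e framework's real implementation) of the wait loop seen
// in the stack traces: wait.PollImmediate -> list pods by label selector ->
// count Running pods. Interval/timeout values are assumptions.
package statefulsetwaitsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPods polls until `want` pods matching the selector are Running.
// Any error from the List call (here: "connect: connection refused" once the
// apiserver went away) is returned immediately and ends the wait as a failure.
func waitForRunningPods(c kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d stateful pods, waiting for %d\n", running, want)
		return running >= want, nil
	})
}

In the real test the error does not simply end the wait: ExpectNoError is called inside the poll condition, so the Fail panics inside runConditionWithCrashProtectionWithContext, which is why Ginkgo reports "Your Test Panicked" rather than an ordinary assertion failure.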
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520 k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab from junit_01.xml
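The log that follows shows this spec's failure mode: the inclusterclient pod comes up, but each "polling logs" round still sees no rotated credentials ("Retrying. Still waiting to see more unique tokens: got=0, want=2"), so the test keeps retrying until it times out at service_accounts.go:520. The check amounts to reading the pod's logs and counting distinct service-account tokens reported there. A rough sketch of that step, assuming (hypothetically) that the client pod prints each token it authenticates with on its own line prefixed with "token="; the prefix and helper name are assumptions, not the test's actual log format:

// Sketch of the "polling logs ... waiting to see more unique tokens" step.
// Assumption: the inclusterclient pod logs each token it uses as "token=<value>".
package tokenrotationsketch

import (
	"context"
	"strings"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// countUniqueTokens fetches the pod's logs and counts distinct token values.
// A test like this retries until it has seen at least two, which demonstrates
// that the projected service-account token was rotated while the pod ran.
func countUniqueTokens(c kubernetes.Interface, ns, pod string) (int, error) {
	raw, err := c.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{}).DoRaw(context.TODO())
	if err != nil {
		return 0, err
	}
	seen := map[string]bool{}
	for _, line := range strings.Split(string(raw), "\n") {
		if strings.HasPrefix(line, "token=") {
			seen[strings.TrimPrefix(line, "token=")] = true
		}
	}
	return len(seen), nil
}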
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 09:56:35.347 Nov 25 09:56:35.347: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/25/22 09:56:35.349 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 09:56:35.747 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 09:56:35.862 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 25 09:56:36.102: INFO: created pod Nov 25 09:56:36.102: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 25 09:56:36.102: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-1153" to be "running and ready" Nov 25 09:56:36.190: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 87.901434ms Nov 25 09:56:36.190: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 09:56:38.245: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142773912s Nov 25 09:56:38.245: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 09:56:40.427: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 4.32503405s Nov 25 09:56:40.427: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 25 09:56:40.427: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient] Nov 25 09:56:40.427: INFO: pod is ready Nov 25 09:57:40.427: INFO: polling logs Nov 25 09:57:40.551: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2 Nov 25 09:58:40.428: INFO: polling logs Nov 25 09:58:40.622: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2 Nov 25 09:59:40.428: INFO: polling logs Nov 25 09:59:46.200: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2 Nov 25 10:00:40.428: INFO: polling logs Nov 25 10:00:41.090: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2 ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m0.624s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m0.001s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  > k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
      test/e2e/auth/service_accounts.go:503
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[... identical Progress Reports for Ginkgo Process #10 repeat roughly every 20 seconds, Spec Runtime 5m20s through 18m20s, each showing goroutine 1179 blocked in the same wait.Poll stack at test/e2e/auth/service_accounts.go:503; the log lines emitted between those reports follow ...]
Nov 25 10:01:40.428: INFO: polling logs
Nov 25 10:01:40.491: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:02:40.428: INFO: polling logs
Nov 25 10:02:40.589: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:03:40.428: INFO: polling logs
Nov 25 10:03:40.467: INFO: Error pulling logs: Get "https://35.230.98.143/api/v1/namespaces/svcaccounts-1153/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.230.98.143:443: connect: connection refused
Nov 25 10:04:40.427: INFO: polling logs
Nov 25 10:04:40.574: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:05:40.428: INFO: polling logs
Nov 25 10:05:40.499: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:06:40.428: INFO: polling logs
Nov 25 10:06:40.695: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:07:40.428: INFO: polling logs
Nov 25 10:07:40.521: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:08:40.427: INFO: polling logs
Nov 25 10:08:40.477: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 25 10:09:40.428: INFO: polling logs
Nov 25 10:09:40.513: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 25 10:10:40.428: INFO: polling logs
Nov 25 10:10:43.844: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 25 10:11:40.428: INFO: polling logs
Nov 25 10:11:40.472: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 25 10:12:40.428: INFO: polling logs
Nov 25 10:12:40.472: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 25 10:13:40.428: INFO: polling logs
Nov 25 10:13:40.485: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
Nov 25 10:14:40.428: INFO: polling logs
Nov 25 10:14:40.474: INFO: Retrying. Still waiting to see more unique tokens: got=0, want=2
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m40.704s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m40.081s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m0.707s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m0.084s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:15:40.427: INFO: polling logs Nov 25 10:15:40.472: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m20.708s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m20.085s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m40.71s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m40.087s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #10 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 20m0.711s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 20m0.088s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1179 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004f52a38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0052f3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005192cf0, 0xc001b62de0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:16:40.428: INFO: polling logs Nov 25 10:16:40.573: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) Nov 25 10:16:40.573: INFO: polling logs Nov 25 10:16:40.769: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) Nov 25 10:16:40.769: FAIL: Unexpected error: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 25 10:16:40.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:16:40.887 STEP: Collecting events from namespace "svcaccounts-1153". 11/25/22 10:16:40.887 STEP: Found 5 events. 
11/25/22 10:16:41 Nov 25 10:16:41.001: INFO: At 2022-11-25 09:56:36 +0000 UTC - event for inclusterclient: {default-scheduler } Scheduled: Successfully assigned svcaccounts-1153/inclusterclient to bootstrap-e2e-minion-group-td9f Nov 25 10:16:41.001: INFO: At 2022-11-25 09:56:38 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:16:41.001: INFO: At 2022-11-25 09:56:38 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container inclusterclient Nov 25 10:16:41.001: INFO: At 2022-11-25 09:56:38 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container inclusterclient Nov 25 10:16:41.001: INFO: At 2022-11-25 09:56:38 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container inclusterclient Nov 25 10:16:41.082: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:16:41.082: INFO: inclusterclient bootstrap-e2e-minion-group-td9f Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:39 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:39 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:36 +0000 UTC }] Nov 25 10:16:41.082: INFO: Nov 25 10:16:41.220: INFO: Unable to fetch svcaccounts-1153/inclusterclient/inclusterclient logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) Nov 25 10:16:41.281: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:16:41.338: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 11071 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:15:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:16:41.339: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:16:41.399: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:16:41.491: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:16:41.491: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:16:41.552: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 11444 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8555":"bootstrap-e2e-minion-group-428h","csi-hostpath-provisioning-9276":"bootstrap-e2e-minion-group-428h","csi-hostpath-volumemode-9948":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:16:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:16:41.552: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:16:41.610: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:16:41.752: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:16:41.752: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:16:41.826: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 11024 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:15:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:13:07 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:13:07 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:13:07 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:13:07 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:16:41.826: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:16:41.888: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:16:41.964: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:16:41.964: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:16:42.040: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 11456 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:16:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:16:42.040: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:16:42.109: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:16:42.181: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-1153" for this suite. 11/25/22 10:16:42.181
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:589 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d
from junit_01.xml
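For anyone triaging this failure, below is a minimal stand-alone Go sketch (not the e2e framework's own code at kubectl.go:589) that re-issues the same kubectl invocation recorded in the log and reports whatever exit code kubectl returns. The namespace `kubectl-3351` and the stdin filler are placeholders lifted from this run; in the failed run kubectl exited 1 instead of surfacing the container's status because attaching to the pod fell back to streaming logs and the API server proxy reported "No agent available".

```go
// Stand-alone reproduction sketch. Assumptions: kubectl is on PATH, the default
// kubeconfig points at a reachable cluster, and the target namespace exists.
// This is NOT the e2e test's implementation, only the same command it issues.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl",
		"run", "-i",
		"--image=registry.k8s.io/e2e-test-images/busybox:1.29-4",
		"--restart=Never",
		"--pod-running-timeout=2m0s",
		"failure-4",
		"--leave-stdin-open",
		"--namespace=kubectl-3351", // placeholder: substitute any scratch namespace
		"--", "/bin/sh", "-c", "exit 42",
	)
	// Arbitrary stdin filler so -i has something to forward (assumption, not
	// necessarily what the e2e test sends).
	cmd.Stdin = strings.NewReader("abcd1234\n")

	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl output:\n%s\n", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubectl exited 0")
	case errors.As(err, &exitErr):
		// In the logged run this printed 1: the attach failed with
		// "No agent available" and kubectl reported its own error instead
		// of the container's exit status.
		fmt.Println("kubectl exit code:", exitErr.ExitCode())
	default:
		fmt.Println("kubectl could not be started:", err)
	}
}
```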
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:17:39.93 Nov 25 10:17:39.930: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 10:17:39.932 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:17:40.251 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:17:40.389 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 10:17:40.494 Nov 25 10:17:40.494: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 create -f -' Nov 25 10:17:40.983: INFO: stderr: "" Nov 25 10:17:40.983: INFO: stdout: "pod/httpd created\n" Nov 25 10:17:40.983: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 10:17:40.983: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3351" to be "running and ready" Nov 25 10:17:41.054: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 70.907188ms Nov 25 10:17:41.054: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:43.161: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178098445s Nov 25 10:17:43.161: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:45.138: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155717016s Nov 25 10:17:45.138: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:47.108: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125462671s Nov 25 10:17:47.108: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:49.133: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150602858s Nov 25 10:17:49.133: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:51.143: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159974141s Nov 25 10:17:51.143: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:53.157: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.174742972s Nov 25 10:17:53.158: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:55.357: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.374097312s Nov 25 10:17:55.357: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:57.126: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.14305084s Nov 25 10:17:57.126: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:17:59.299: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.316448334s Nov 25 10:17:59.299: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 10:18:01.135: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.15238849s Nov 25 10:18:01.135: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:03.163: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.18009796s Nov 25 10:18:03.163: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:05.171: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.187834328s Nov 25 10:18:05.171: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:07.124: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.141612417s Nov 25 10:18:07.124: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:09.211: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.227896005s Nov 25 10:18:09.211: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:11.127: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.144256218s Nov 25 10:18:11.127: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:13.102: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.119641814s Nov 25 10:18:13.102: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:15.235: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 34.252419686s Nov 25 10:18:15.235: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:17.148: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.165190405s Nov 25 10:18:17.148: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:40 +0000 UTC }] Nov 25 10:18:19.128: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 38.145485529s Nov 25 10:18:19.128: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 10:18:19.128: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command with --leave-stdin-open test/e2e/kubectl/kubectl.go:585 Nov 25 10:18:19.128: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42' Nov 25 10:18:23.913: INFO: rc: 1 Nov 25 10:18:23.913: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc000b29450>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42:\nCommand stdout:\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nwarning: couldn't attach to pod/failure-4, falling back to streaming logs: error dialing backend: No agent available\nError from server: Get \"https://10.138.0.4:10250/containerLogs/kubectl-3351/failure-4/failure-4\": No agent available\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 10:18:23.913: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42: Command stdout: stderr: If you don't see a command prompt, try pressing enter. 
warning: couldn't attach to pod/failure-4, falling back to streaming logs: error dialing backend: No agent available Error from server: Get "https://10.138.0.4:10250/containerLogs/kubectl-3351/failure-4/failure-4": No agent available error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 10:18:23.913 Nov 25 10:18:23.914: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 delete --grace-period=0 --force -f -' Nov 25 10:18:24.330: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 25 10:18:24.330: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 25 10:18:24.330: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 get rc,svc -l name=httpd --no-headers' Nov 25 10:18:24.867: INFO: stderr: "No resources found in kubectl-3351 namespace.\n" Nov 25 10:18:24.867: INFO: stdout: "" Nov 25 10:18:24.868: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3351 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 25 10:18:25.272: INFO: stderr: "" Nov 25 10:18:25.272: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 10:18:25.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:18:25.401 STEP: Collecting events from namespace "kubectl-3351". 11/25/22 10:18:25.401 STEP: Found 9 events. 
11/25/22 10:18:25.526 Nov 25 10:18:25.526: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for failure-4: { } Scheduled: Successfully assigned kubectl-3351/failure-4 to bootstrap-e2e-minion-group-td9f Nov 25 10:18:25.526: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for httpd: { } Scheduled: Successfully assigned kubectl-3351/httpd to bootstrap-e2e-minion-group-n625 Nov 25 10:18:25.526: INFO: At 2022-11-25 10:17:44 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Nov 25 10:18:25.526: INFO: At 2022-11-25 10:17:44 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Created: Created container httpd Nov 25 10:18:25.526: INFO: At 2022-11-25 10:17:44 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Started: Started container httpd Nov 25 10:18:25.527: INFO: At 2022-11-25 10:18:20 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-td9f} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-8k6l2" : failed to sync configmap cache: timed out waiting for the condition Nov 25 10:18:25.527: INFO: At 2022-11-25 10:18:22 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 25 10:18:25.527: INFO: At 2022-11-25 10:18:22 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container failure-4 Nov 25 10:18:25.527: INFO: At 2022-11-25 10:18:22 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container failure-4 Nov 25 10:18:25.686: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:18:25.686: INFO: failure-4 bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:24 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:24 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:19 +0000 UTC }] Nov 25 10:18:25.686: INFO: Nov 25 10:18:25.854: INFO: Unable to fetch kubectl-3351/failure-4/failure-4 logs: an error on the server ("unknown") has prevented the request from succeeding (get pods failure-4) Nov 25 10:18:25.960: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:18:26.060: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 11071 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:15:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:18:26.061: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:18:26.142: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:18:26.242: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:18:26.242: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:18:26.301: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 12614 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8555":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:18:26.301: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:18:26.370: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:18:26.514: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:18:26.514: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:18:26.605: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 12656 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5017":"bootstrap-e2e-minion-group-n625","csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-hostpath-multivolume-829":"bootstrap-e2e-minion-group-n625","csi-hostpath-provisioning-6576":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-4433":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 10:13:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 10:18:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 10:18:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:25 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:25 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:25 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:18:25 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-829^613c9e8d-6caa-11ed-bd8a-92bf1c27657c kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935 kubernetes.io/csi/csi-hostpath-provisioning-6576^6fc0164e-6caa-11ed-b24a-1ecd544dedc3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-829^613c9e8d-6caa-11ed-bd8a-92bf1c27657c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-4433^6b975645-6caa-11ed-8363-1a862bfe1832,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-6576^6fc0164e-6caa-11ed-b24a-1ecd544dedc3,DevicePath:,},},Config:nil,},} Nov 25 10:18:26.605: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:18:26.680: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:18:26.783: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:18:26.783: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:18:26.862: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 12597 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:18:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:02 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:02 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:02 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:18:02 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:18:26.863: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:18:26.932: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:18:27.005: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-3351" for this suite. 11/25/22 10:18:27.005
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/kubectl/kubectl.go:567 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.5() test/e2e/kubectl/kubectl.go:567 +0x31e
from junit_01.xml
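The failure recorded below ("Missing expected 'timed out' error") comes from the assertion at kubectl.go:567. As a minimal stand-alone sketch, the snippet that follows re-runs the same `kubectl run` invocation seen in this log and checks the error text for the "timed out" substring the test expects; it assumes a reachable cluster and kubectl on PATH, the namespace and stdin data are placeholders, and it is an illustration of the expected behaviour, not the e2e test source.

```go
// Hypothetical reproduction sketch (not the e2e test code): issue the same
// `kubectl run` command recorded in the log and inspect the error, which the
// test expects to mention "timed out". Namespace and image are copied from
// this log; the stdin payload is a placeholder.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl",
		"--namespace=kubectl-6689",
		"run", "-i",
		"--image=registry.k8s.io/e2e-test-images/busybox:1.29-4",
		"--restart=OnFailure",
		"--pod-running-timeout=2m0s",
		"failure-2",
		"--", "/bin/sh", "-c", "cat && exit 42")
	// The e2e test presumably pipes data on stdin so `cat` can complete;
	// any short string works for this sketch.
	cmd.Stdin = strings.NewReader("placeholder input\n")

	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected success: the command was expected to fail")
		return
	}
	// Expected outcome per the assertion: an error whose text contains
	// "timed out". In the run logged below, the command instead failed with a
	// plain exit status 1 (exec.CodeExitError{Code:1}), so the test failed.
	if strings.Contains(string(out), "timed out") || strings.Contains(err.Error(), "timed out") {
		fmt.Println("got the expected 'timed out' error")
	} else {
		fmt.Printf("missing expected 'timed out' error, got: %v\noutput:\n%s\n", err, out)
	}
}
```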
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:16:42.294 Nov 25 10:16:42.294: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 10:16:42.296 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:16:42.492 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:16:42.583 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 10:16:42.67 Nov 25 10:16:42.671: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6689 create -f -' Nov 25 10:16:43.255: INFO: stderr: "" Nov 25 10:16:43.256: INFO: stdout: "pod/httpd created\n" Nov 25 10:16:43.256: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 10:16:43.256: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6689" to be "running and ready" Nov 25 10:16:43.395: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 139.090732ms Nov 25 10:16:43.395: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:45.461: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205572636s Nov 25 10:16:45.461: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:47.438: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182093361s Nov 25 10:16:47.438: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:49.464: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208884322s Nov 25 10:16:49.464: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:51.489: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2339382s Nov 25 10:16:51.490: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:53.452: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196659072s Nov 25 10:16:53.452: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:55.445: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.189435997s Nov 25 10:16:55.445: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:57.449: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.193597027s Nov 25 10:16:57.449: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:16:59.445: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.189495688s Nov 25 10:16:59.445: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:01.445: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.189434765s Nov 25 10:17:01.445: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:03.476: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.220786079s Nov 25 10:17:03.476: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:05.483: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.227884322s Nov 25 10:17:05.483: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:07.451: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.195808923s Nov 25 10:17:07.451: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:09.478: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.222796855s Nov 25 10:17:09.478: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:11.476: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.22093818s Nov 25 10:17:11.476: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:13.456: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.200031915s Nov 25 10:17:13.456: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:15.463: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.20755925s Nov 25 10:17:15.463: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:17.441: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.185638198s Nov 25 10:17:17.441: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:19.450: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.194453312s Nov 25 10:17:19.450: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:21.451: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.195529089s Nov 25 10:17:21.451: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:23.516: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.260222026s Nov 25 10:17:23.516: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:25.470: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 42.214138219s Nov 25 10:17:25.470: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 10:17:27.448: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 44.192381967s Nov 25 10:17:27.448: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:29.441: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.18572813s Nov 25 10:17:29.441: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:31.466: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.209966262s Nov 25 10:17:31.466: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:33.458: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.202336745s Nov 25 10:17:33.458: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:35.519: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 52.263245497s Nov 25 10:17:35.519: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:37.612: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.356779817s Nov 25 10:17:37.612: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:39.460: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 56.204897591s Nov 25 10:17:39.460: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:41.448: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.192320016s Nov 25 10:17:41.448: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:17:43.471: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.215554063s Nov 25 10:17:43.471: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC }] Nov 25 10:17:45.487: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.231496417s Nov 25 10:17:45.487: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC }] Nov 25 10:17:47.459: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.203188897s Nov 25 10:17:47.459: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC }] Nov 25 10:17:49.484: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.228848031s Nov 25 10:17:49.484: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-td9f' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:26 +0000 UTC }] Nov 25 10:17:51.491: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.235271816s Nov 25 10:17:51.491: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 10:17:51.491: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command without --restart=Never test/e2e/kubectl/kubectl.go:558 Nov 25 10:17:51.491: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6689 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --pod-running-timeout=2m0s failure-2 -- /bin/sh -c cat && exit 42' Nov 25 10:17:55.401: INFO: rc: 1 Nov 25 10:17:55.401: FAIL: Missing expected 'timed out' error, got: exec.CodeExitError{Err:(*errors.errorString)(0xc004fecc50), Code:1} Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.5() test/e2e/kubectl/kubectl.go:567 +0x31e [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 10:17:55.401 Nov 25 10:17:55.401: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6689 delete --grace-period=0 --force -f -' Nov 25 10:17:55.888: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 25 10:17:55.888: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 25 10:17:55.888: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6689 get rc,svc -l name=httpd --no-headers' Nov 25 10:17:56.240: INFO: stderr: "No resources found in kubectl-6689 namespace.\n" Nov 25 10:17:56.240: INFO: stdout: "" Nov 25 10:17:56.240: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6689 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 25 10:17:56.478: INFO: stderr: "" Nov 25 10:17:56.478: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 10:17:56.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:17:56.557 STEP: Collecting events from namespace "kubectl-6689". 11/25/22 10:17:56.557 STEP: Found 13 events. 11/25/22 10:17:56.61 Nov 25 10:17:56.610: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for failure-2: { } Scheduled: Successfully assigned kubectl-6689/failure-2 to bootstrap-e2e-minion-group-td9f Nov 25 10:17:56.610: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for httpd: { } Scheduled: Successfully assigned kubectl-6689/httpd to bootstrap-e2e-minion-group-td9f Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:29 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-9zp4r" : failed to sync configmap cache: timed out waiting for the condition Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:31 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:37 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container httpd Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:37 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 5.42325406s (5.423288827s including waiting) Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:37 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container httpd Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:37 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container httpd Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:38 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:39 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:52 +0000 UTC - event for failure-2: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:53 +0000 UTC - event for failure-2: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container failure-2 Nov 25 10:17:56.610: INFO: At 2022-11-25 10:17:53 +0000 UTC - event for failure-2: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container failure-2 Nov 25 10:17:56.676: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:17:56.676: INFO: failure-2 bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:17:51 +0000 UTC }] Nov 25 10:17:56.676: INFO: Nov 25 10:17:56.751: INFO: Unable to fetch kubectl-6689/failure-2/failure-2 logs: an error on the server ("unknown") has prevented the request from succeeding (get pods failure-2) Nov 25 10:17:56.813: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:17:56.868: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 11071 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:15:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:15:50 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:17:56.868: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:17:56.926: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:17:57.005: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:17:57.005: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:17:57.060: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 12107 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8555":"bootstrap-e2e-minion-group-428h","csi-hostpath-volumemode-9948":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:17:11 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:17:57.060: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:17:57.132: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:17:57.228: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:17:57.228: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:17:57.288: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 12317 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-hostpath-multivolume-829":"bootstrap-e2e-minion-group-n625","csi-hostpath-provisioning-6576":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-4433":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 10:13:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 10:17:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 10:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:17:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-829^613c9e8d-6caa-11ed-bd8a-92bf1c27657c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-4433^6b975645-6caa-11ed-8363-1a862bfe1832,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:17:57.288: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:17:57.360: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:17:57.457: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:17:57.457: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:17:57.528: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 12028 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f"} 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:13:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:17:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:13:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:17:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:17:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:17:57.528: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:17:57.595: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:17:57.691: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-6689" for this suite. 11/25/22 10:17:57.692
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/kubectl/kubectl.go:580 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.6() test/e2e/kubectl/kubectl.go:580 +0x36a (from junit_01.xml)
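Note on the failure below: the test at test/e2e/kubectl/kubectl.go:580 runs a kubectl command that is expected to time out, then asserts that the returned error mentions "timed out"; in this run the command instead returned a plain non-zero exit (exec.CodeExitError, Code: 1). The following is only a minimal, hypothetical Go sketch of that style of check, not the actual e2e test source; the helper name, stdin data, and standalone use of os/exec are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expectTimedOut runs cmd and reports an error unless the failure looks like a
// timeout. This mirrors the assertion pattern implied by the log message
// "Missing expected 'timed out' error"; it is not the real test helper.
func expectTimedOut(cmd *exec.Cmd) error {
	out, err := cmd.CombinedOutput()
	if err == nil {
		return fmt.Errorf("expected an error, but the command succeeded: %s", out)
	}
	if !strings.Contains(err.Error(), "timed out") && !strings.Contains(string(out), "timed out") {
		// This is the situation reported in the log: the command failed, but with
		// an ordinary exit-code error instead of the expected timeout.
		return fmt.Errorf("missing expected 'timed out' error, got: %v", err)
	}
	return nil
}

func main() {
	// Hypothetical invocation echoing the command shown in the log below.
	cmd := exec.Command("kubectl", "--namespace=kubectl-2154", "run", "-i",
		"--image=registry.k8s.io/e2e-test-images/busybox:1.29-4",
		"--restart=OnFailure", "--rm", "--pod-running-timeout=2m0s",
		"failure-3", "--", "/bin/sh", "-c", "cat && exit 42")
	cmd.Stdin = strings.NewReader("hello\n") // illustrative stdin for `cat`
	if err := expectTimedOut(cmd); err != nil {
		fmt.Println("FAIL:", err)
	}
}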
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 09:54:56.182 Nov 25 09:54:56.182: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 09:54:56.184 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 09:54:56.352 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 09:54:56.435 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 09:54:56.517 Nov 25 09:54:56.518: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2154 create -f -' Nov 25 09:54:57.088: INFO: stderr: "" Nov 25 09:54:57.088: INFO: stdout: "pod/httpd created\n" Nov 25 09:54:57.088: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 09:54:57.088: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2154" to be "running and ready" Nov 25 09:54:57.137: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.942467ms Nov 25 09:54:57.137: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:54:59.256: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168660263s Nov 25 09:54:59.256: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:55:01.204: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116544451s Nov 25 09:55:01.204: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:55:03.212: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124814744s Nov 25 09:55:03.212: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:55:05.194: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106809685s Nov 25 09:55:05.194: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:55:07.240: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152654084s Nov 25 09:55:07.240: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-n625' to be 'Running' but was 'Pending' Nov 25 09:55:09.248: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.160134068s Nov 25 09:55:09.248: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:11.256: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.168013738s Nov 25 09:55:11.256: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:13.203: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.115332061s Nov 25 09:55:13.203: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:15.371: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.283342067s Nov 25 09:55:15.371: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:17.234: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.14589807s Nov 25 09:55:17.234: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:19.287: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.199250678s Nov 25 09:55:19.287: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-n625' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:54:57 +0000 UTC }] Nov 25 09:55:21.192: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 24.104152826s Nov 25 09:55:21.192: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 09:55:21.192: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command without --restart=Never, but with --rm test/e2e/kubectl/kubectl.go:571 Nov 25 09:55:21.192: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2154 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --rm --pod-running-timeout=2m0s failure-3 -- /bin/sh -c cat && exit 42' Nov 25 09:55:41.769: INFO: rc: 1 Nov 25 09:55:41.769: FAIL: Missing expected 'timed out' error, got: exec.CodeExitError{Err:(*errors.errorString)(0xc001288860), Code:1} Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.6() test/e2e/kubectl/kubectl.go:580 +0x36a [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 09:55:41.77 Nov 25 09:55:41.770: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2154 delete --grace-period=0 --force -f -' Nov 25 09:55:42.068: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 25 09:55:42.068: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 25 09:55:42.068: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2154 get rc,svc -l name=httpd --no-headers' Nov 25 09:55:42.315: INFO: stderr: "No resources found in kubectl-2154 namespace.\n" Nov 25 09:55:42.315: INFO: stdout: "" Nov 25 09:55:42.315: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2154 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 25 09:55:42.512: INFO: stderr: "" Nov 25 09:55:42.512: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 09:55:42.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 09:55:42.571 STEP: Collecting events from namespace "kubectl-2154". 11/25/22 09:55:42.571 STEP: Found 13 events. 11/25/22 09:55:44.191 Nov 25 09:55:44.191: INFO: At 2022-11-25 09:54:57 +0000 UTC - event for httpd: {default-scheduler } Scheduled: Successfully assigned kubectl-2154/httpd to bootstrap-e2e-minion-group-n625 Nov 25 09:55:44.191: INFO: At 2022-11-25 09:54:58 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:04 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Started: Started container httpd Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:04 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 5.895791049s (5.895811116s including waiting) Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:04 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Created: Created container httpd Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:07 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Killing: Stopping container httpd Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:09 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:10 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:21 +0000 UTC - event for failure-3: {default-scheduler } Scheduled: Successfully assigned kubectl-2154/failure-3 to bootstrap-e2e-minion-group-td9f Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:22 +0000 UTC - event for failure-3: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container failure-3 Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:22 +0000 UTC - event for failure-3: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container failure-3 Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:22 +0000 UTC - event for failure-3: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 25 09:55:44.191: INFO: At 2022-11-25 09:55:23 +0000 UTC - event for failure-3: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container failure-3 Nov 25 09:55:44.251: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 09:55:44.251: INFO: failure-3 bootstrap-e2e-minion-group-td9f Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:55:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:55:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:55:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:55:21 +0000 UTC }] Nov 25 09:55:44.251: INFO: Nov 25 09:55:44.724: INFO: Logging node info for node bootstrap-e2e-master Nov 25 09:55:44.834: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 770 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 09:54:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 09:54:15 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 09:54:15 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 09:54:15 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 09:54:15 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 09:55:44.834: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 09:55:44.945: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 09:55:45.214: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container etcd-container ready: true, restart count 1 Nov 25 09:55:45.214: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 09:52:57 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 09:55:45.214: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 09:52:57 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 25 09:55:45.214: INFO: 
metadata-proxy-v0.1-z25qg started at 2022-11-25 09:53:48 +0000 UTC (0+2 container statuses recorded) Nov 25 09:55:45.214: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 09:55:45.214: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 09:55:45.214: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container konnectivity-server-container ready: true, restart count 2 Nov 25 09:55:45.214: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container kube-scheduler ready: true, restart count 0 Nov 25 09:55:45.214: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container etcd-container ready: true, restart count 0 Nov 25 09:55:45.214: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container kube-apiserver ready: true, restart count 0 Nov 25 09:55:45.214: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:45.214: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 25 09:55:45.760: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 09:55:45.760: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 09:55:45.840: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 2114 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1999":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 09:53:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:54:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 09:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 09:53:26 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:24 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:24 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:24 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 09:55:24 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 09:55:45.841: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 09:55:45.916: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 09:55:46.094: INFO: coredns-6d97d5ddb-k646d started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container coredns ready: true, restart count 2 Nov 25 09:55:46.094: INFO: konnectivity-agent-srgs2 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container konnectivity-agent ready: false, restart count 2 Nov 25 09:55:46.094: INFO: coredns-6d97d5ddb-fjb9w started at 2022-11-25 09:53:33 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container coredns ready: false, restart count 2 Nov 25 09:55:46.094: INFO: csi-mockplugin-0 started at 2022-11-25 09:54:57 +0000 UTC (0+4 container statuses recorded) Nov 25 09:55:46.094: INFO: Container busybox ready: true, restart count 1 Nov 25 09:55:46.094: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 09:55:46.094: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 09:55:46.094: INFO: Container mock ready: true, restart count 1 Nov 25 09:55:46.094: INFO: kube-proxy-bootstrap-e2e-minion-group-428h started at 2022-11-25 09:53:21 +0000 UTC (0+1 
container statuses recorded) Nov 25 09:55:46.094: INFO: Container kube-proxy ready: true, restart count 1 Nov 25 09:55:46.094: INFO: csi-hostpathplugin-0 started at 2022-11-25 09:54:43 +0000 UTC (0+7 container statuses recorded) Nov 25 09:55:46.094: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container hostpath ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 09:55:46.094: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 09:55:46.094: INFO: pod-54d61945-f7ca-4c83-b4f5-489589755091 started at 2022-11-25 09:55:01 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container write-pod ready: false, restart count 0 Nov 25 09:55:46.094: INFO: metadata-proxy-v0.1-fg9tk started at 2022-11-25 09:53:22 +0000 UTC (0+2 container statuses recorded) Nov 25 09:55:46.094: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 09:55:46.094: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 09:55:46.094: INFO: kube-dns-autoscaler-5f6455f985-gvgsn started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container autoscaler ready: false, restart count 2 Nov 25 09:55:46.094: INFO: l7-default-backend-8549d69d99-f9sx9 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 09:55:46.094: INFO: hostpath-injector started at 2022-11-25 09:54:53 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container hostpath-injector ready: false, restart count 0 Nov 25 09:55:46.094: INFO: volume-snapshot-controller-0 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container volume-snapshot-controller ready: false, restart count 2 Nov 25 09:55:46.094: INFO: hostexec-bootstrap-e2e-minion-group-428h-mlmt8 started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.094: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 09:55:46.670: INFO: Latency metrics for node bootstrap-e2e-minion-group-428h Nov 25 09:55:46.670: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 09:55:46.725: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 1977 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-623":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-3071":"csi-mock-csi-mock-volumes-3071"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 09:53:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:55:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 09:55:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 09:53:33 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:31 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:31 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:31 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 09:55:31 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 09:55:46.726: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 09:55:46.792: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 09:55:46.900: INFO: metadata-proxy-v0.1-j55mq started at 2022-11-25 09:53:29 +0000 UTC (0+2 container statuses recorded) Nov 25 09:55:46.900: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 09:55:46.900: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 09:55:46.900: INFO: konnectivity-agent-l5wh2 started at 2022-11-25 09:53:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container konnectivity-agent ready: true, restart count 2 Nov 25 09:55:46.900: INFO: metrics-server-v0.5.2-867b8754b9-c8gh8 started at 2022-11-25 09:53:51 +0000 UTC (0+2 container statuses recorded) Nov 25 09:55:46.900: INFO: Container metrics-server ready: false, restart count 1 Nov 25 09:55:46.900: INFO: Container metrics-server-nanny ready: false, restart count 2 Nov 25 09:55:46.900: INFO: hostexec-bootstrap-e2e-minion-group-n625-p64cp started at 2022-11-25 09:55:45 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 09:55:46.900: INFO: kube-proxy-bootstrap-e2e-minion-group-n625 started at 2022-11-25 09:53:28 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container kube-proxy ready: true, restart count 3 Nov 25 09:55:46.900: INFO: hostpath-io-client started at 2022-11-25 09:55:40 +0000 UTC (1+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Init container hostpath-io-init ready: true, restart count 0 Nov 25 09:55:46.900: INFO: Container hostpath-io-client ready: true, restart count 0 Nov 25 09:55:46.900: INFO: back-off-cap started at 2022-11-25 09:54:55 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container back-off-cap ready: false, restart count 2 Nov 25 09:55:46.900: INFO: pod-configmaps-5305fe98-a8c7-4910-b0c7-bb934f141c12 started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 09:55:46.900: INFO: csi-hostpathplugin-0 started at 2022-11-25 09:54:58 +0000 UTC (0+7 container statuses recorded) Nov 25 09:55:46.900: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container 
csi-provisioner ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container hostpath ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 09:55:46.900: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 09:55:46.900: INFO: csi-mockplugin-0 started at 2022-11-25 09:54:43 +0000 UTC (0+4 container statuses recorded) Nov 25 09:55:46.900: INFO: Container busybox ready: true, restart count 0 Nov 25 09:55:46.900: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 09:55:46.900: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 09:55:46.900: INFO: Container mock ready: true, restart count 0 Nov 25 09:55:46.900: INFO: csi-mockplugin-0 started at 2022-11-25 09:54:57 +0000 UTC (0+4 container statuses recorded) Nov 25 09:55:46.900: INFO: Container busybox ready: true, restart count 1 Nov 25 09:55:46.900: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 09:55:46.900: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 09:55:46.900: INFO: Container mock ready: true, restart count 1 Nov 25 09:55:46.900: INFO: pvc-volume-tester-k4q22 started at 2022-11-25 09:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:46.900: INFO: Container volume-tester ready: false, restart count 0 Nov 25 09:55:47.344: INFO: Latency metrics for node bootstrap-e2e-minion-group-n625 Nov 25 09:55:47.344: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 09:55:47.432: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 1516 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 09:53:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 09:55:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 
UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 09:53:32 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:00 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:00 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 09:55:00 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 09:55:00 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 09:55:47.432: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 09:55:47.530: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 09:55:47.793: INFO: pod-5da2b91a-85fc-45db-a15a-ad1a1c5e0e85 started at 2022-11-25 09:54:42 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container write-pod ready: false, restart count 0 Nov 25 09:55:47.793: INFO: kube-proxy-bootstrap-e2e-minion-group-td9f started at 2022-11-25 09:53:28 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 09:55:47.793: INFO: hostexec-bootstrap-e2e-minion-group-td9f-hrd94 started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 09:55:47.793: INFO: pod-secrets-e8d0cdd6-7d92-494d-be25-753966587480 started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 09:55:47.793: INFO: var-expansion-ec2ccd5e-8b03-4f88-aa33-ac1e05967ca8 started at 2022-11-25 09:55:33 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container dapi-container 
ready: true, restart count 0 Nov 25 09:55:47.793: INFO: test-hostpath-type-h75wf started at 2022-11-25 09:55:40 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 09:55:47.793: INFO: hostexec-bootstrap-e2e-minion-group-td9f-c96vt started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 09:55:47.793: INFO: hostexec-bootstrap-e2e-minion-group-td9f-x9q9d started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 09:55:47.793: INFO: failure-3 started at 2022-11-25 09:55:21 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container failure-3 ready: true, restart count 0 Nov 25 09:55:47.793: INFO: test-hostpath-type-zsx6n started at 2022-11-25 09:55:17 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 09:55:47.793: INFO: test-hostpath-type-8sg5f started at 2022-11-25 09:55:26 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 09:55:47.793: INFO: var-expansion-052bd198-186c-4376-a370-d68adcc7c233 started at 2022-11-25 09:54:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container dapi-container ready: false, restart count 0 Nov 25 09:55:47.793: INFO: pod-7c1395d6-ff5b-4568-b21a-d2862076e20c started at 2022-11-25 09:54:55 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container write-pod ready: false, restart count 0 Nov 25 09:55:47.793: INFO: metadata-proxy-v0.1-wx89j started at 2022-11-25 09:53:29 +0000 UTC (0+2 container statuses recorded) Nov 25 09:55:47.793: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 09:55:47.793: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 09:55:47.793: INFO: konnectivity-agent-hnbdq started at 2022-11-25 09:53:41 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container konnectivity-agent ready: true, restart count 2 Nov 25 09:55:47.793: INFO: hostexec-bootstrap-e2e-minion-group-td9f-wgr8b started at 2022-11-25 09:55:45 +0000 UTC (0+1 container statuses recorded) Nov 25 09:55:47.793: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 09:55:48.339: INFO: Latency metrics for node bootstrap-e2e-minion-group-td9f [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-2154" for this suite. 11/25/22 09:55:48.339
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001186000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 from junit_01.xml
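Reading this failure summary: the primary failure is the BeforeEach timing out in framework.go:241, and the [PANICKED] entry is a secondary nil pointer dereference raised by the AfterEach cleanup at loadbalancer.go:1262, which runs even though setup never completed. The Go sketch below is illustrative only, not the actual code at loadbalancer.go:1262; the svc variable is a hypothetical stand-in for whatever state the real BeforeEach would have populated. It shows the general pattern: if cleanup dereferences setup state without a nil guard, a setup timeout (here caused by the unreachable API server) cascades into exactly this kind of panic.

// Hypothetical sketch of the failure pattern; names and structure are assumptions.
package example

import (
	ginkgo "github.com/onsi/ginkgo/v2"
	v1 "k8s.io/api/core/v1"
)

var _ = ginkgo.Describe("[sig-network] LoadBalancers ESIPP [Slow] (sketch)", func() {
	var svc *v1.Service // would be populated by BeforeEach on the happy path

	ginkgo.BeforeEach(func() {
		// If namespace/service-account setup times out (as in the log above),
		// this block never finishes and svc stays nil.
	})

	ginkgo.AfterEach(func() {
		if svc == nil {
			return // guard: setup never finished, nothing to clean up
		}
		_ = svc.Name // safe to dereference only past the guard
	})
})

Without the nil guard in AfterEach, the dereference panics with "invalid memory address or nil pointer dereference", producing a secondary failure record like the one above on top of the original setup timeout.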
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:10:48.818 Nov 25 10:10:48.818: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 10:10:48.82 Nov 25 10:12:48.881: INFO: Unexpected error: <*fmt.wrapError | 0xc001322060>: { msg: "wait for service account \"default\" in namespace \"esipp-7648\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001d39b0>{ s: "timed out waiting for the condition", }, } Nov 25 10:12:48.881: FAIL: wait for service account "default" in namespace "esipp-7648": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001186000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 10:12:48.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:12:48.952 STEP: Collecting events from namespace "esipp-7648". 11/25/22 10:12:48.952 STEP: Found 0 events. 11/25/22 10:12:48.995 Nov 25 10:12:49.101: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:12:49.101: INFO: Nov 25 10:12:49.151: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:12:51.632: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 10287 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:10:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:12:51.633: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:12:51.682: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:12:51.732: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:12:51.732: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:12:51.775: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 10567 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1340":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:12:51.776: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:12:51.824: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:12:51.868: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:12:51.868: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:51.910: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 10219 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:12:51.911: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:51.983: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:12:52.091: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:12:52.091: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:52.184: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 10547 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f","csi-mock-csi-mock-volumes-5186":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 
09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 
2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:12:52.184: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:52.273: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:12:52.329: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-7648" for this suite. 11/25/22 10:12:52.33
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:834 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 There were additional failures detected after the initial failure: [FAILED] Nov 25 10:03:40.707: Couldn't delete ns: "esipp-9860": Delete "https://35.230.98.143/api/v1/namespaces/esipp-9860": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/esipp-9860", Err:(*net.OpError)(0xc000e05810)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
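For context when reading the spec log that follows: the failing spec first creates a Service named external-local-nodes with type=LoadBalancer, switches it to ExternalTrafficPolicy=Local, and only then builds the netserver pods whose readiness it is still polling when the API server becomes unreachable. A minimal client-go sketch of that kind of Service object is shown below; the selector, port numbers, and kubeconfig path are illustrative assumptions, not values taken from the test's own manifest.

// Illustrative sketch only: a Service of type LoadBalancer with
// externalTrafficPolicy: Local, similar in shape to the
// "external-local-nodes" Service the spec creates. The selector, ports,
// and kubeconfig path are assumptions for the example, not the manifest
// the e2e framework actually builds.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the run above pointed at /workspace/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-nodes"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
			// Only nodes hosting a ready endpoint should receive traffic
			// from the cloud load balancer.
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "netserver"}, // assumed label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080), // assumed backend port
			}},
		},
	}

	created, err := client.CoreV1().Services("esipp-9860").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s/%s, waiting for an external IP...\n", created.Namespace, created.Name)
}

With ExternalTrafficPolicy=Local, the cloud load balancer health-checks each node and forwards traffic only to nodes that host a ready endpoint, which is the behaviour the "should only target nodes with endpoints" spec asserts once setup completes.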
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 09:54:40.603 Nov 25 09:54:40.603: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 09:54:40.604 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 09:54:40.745 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 09:54:40.835 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-9860/external-local-nodes with type=LoadBalancer 11/25/22 09:54:41.287 STEP: setting ExternalTrafficPolicy=Local 11/25/22 09:54:41.287 STEP: waiting for loadbalancer for service esipp-9860/external-local-nodes 11/25/22 09:54:41.578 Nov 25 09:54:41.578: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-9860/external-local-nodes 11/25/22 09:56:07.712 Nov 25 09:56:07.712: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-9860 11/25/22 09:56:07.795 STEP: creating a selector 11/25/22 09:56:07.795 STEP: Creating the service pods in kubernetes 11/25/22 09:56:07.795 Nov 25 09:56:07.795: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 09:56:08.137: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-9860" to be "running and ready" Nov 25 09:56:08.217: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 79.994337ms Nov 25 09:56:08.217: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 09:56:10.270: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132518334s Nov 25 09:56:10.270: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 09:56:12.284: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147017523s Nov 25 09:56:12.284: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 09:56:14.281: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.143970066s Nov 25 09:56:14.281: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:16.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.14089795s Nov 25 09:56:16.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:18.284: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.147311522s Nov 25 09:56:18.284: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:20.300: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.162763989s Nov 25 09:56:20.300: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:22.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.143049623s Nov 25 09:56:22.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:24.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.142899764s Nov 25 09:56:24.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:26.296: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.159012909s Nov 25 09:56:26.296: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:28.276: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.138940418s Nov 25 09:56:28.276: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:30.344: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.207128288s Nov 25 09:56:30.344: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:32.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.140521454s Nov 25 09:56:32.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:34.287: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.150177511s Nov 25 09:56:34.287: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:36.298: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.16133373s Nov 25 09:56:36.298: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:38.271: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.133511575s Nov 25 09:56:38.271: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:40.435: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.297613182s Nov 25 09:56:40.435: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:42.279: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.14233461s Nov 25 09:56:42.279: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:44.282: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.144828773s Nov 25 09:56:44.282: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:46.274: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.136970474s Nov 25 09:56:46.274: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:48.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.143102613s Nov 25 09:56:48.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:52.972: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.835193492s Nov 25 09:56:52.972: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:54.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.142536074s Nov 25 09:56:54.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:56.518: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.381180933s Nov 25 09:56:56.518: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:56:58.286: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.149406099s Nov 25 09:56:58.287: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:00.286: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.148769199s Nov 25 09:57:00.286: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:02.283: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.146264887s Nov 25 09:57:02.283: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:04.289: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.152347624s Nov 25 09:57:04.289: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:06.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.140436582s Nov 25 09:57:06.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:08.270: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.133361941s Nov 25 09:57:08.270: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:10.291: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.153848323s Nov 25 09:57:10.291: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:12.301: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.163615088s Nov 25 09:57:12.301: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:14.369: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.232262972s Nov 25 09:57:14.369: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:16.281: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.14343007s Nov 25 09:57:16.281: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:18.281: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.143734789s Nov 25 09:57:18.281: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:20.298: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.161073755s Nov 25 09:57:20.298: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:22.268: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.130984839s Nov 25 09:57:22.268: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:24.285: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.147648613s Nov 25 09:57:24.285: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:26.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.140424886s Nov 25 09:57:26.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:28.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.142803988s Nov 25 09:57:28.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:30.281: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.143526801s Nov 25 09:57:30.281: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:32.285: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.14838836s Nov 25 09:57:32.285: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:34.282: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.144766975s Nov 25 09:57:34.282: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:36.282: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.14464867s Nov 25 09:57:36.282: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:38.312: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.174780096s Nov 25 09:57:38.312: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:40.283: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.145481851s Nov 25 09:57:40.283: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:42.333: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.195960592s Nov 25 09:57:42.333: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:44.317: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.180228797s Nov 25 09:57:44.317: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:46.308: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.170968156s Nov 25 09:57:46.308: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:48.289: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.15141824s Nov 25 09:57:48.289: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:50.305: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.168094398s Nov 25 09:57:50.305: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:52.268: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.131279772s Nov 25 09:57:52.268: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:54.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.143383446s Nov 25 09:57:54.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:56.294: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.15694324s Nov 25 09:57:56.294: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:57:58.280: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.142417506s Nov 25 09:57:58.280: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:00.285: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.148250196s Nov 25 09:58:00.285: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:02.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.140417521s Nov 25 09:58:02.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:04.346: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.208439467s Nov 25 09:58:04.346: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:06.352: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.214654554s Nov 25 09:58:06.352: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:08.275: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.13758601s Nov 25 09:58:08.275: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 09:58:10.315: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2m2.177770378s Nov 25 09:58:10.315: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 09:58:10.315: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 09:58:10.361: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-9860" to be "running and ready" Nov 25 09:58:10.453: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 92.469367ms Nov 25 09:58:10.453: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:12.521: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.160392449s Nov 25 09:58:12.521: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:14.536: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.175128656s Nov 25 09:58:14.536: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:16.522: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.161113227s Nov 25 09:58:16.522: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:18.573: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.212124871s Nov 25 09:58:18.573: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:20.511: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.149953245s Nov 25 09:58:20.511: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:22.515: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.153919929s Nov 25 09:58:22.515: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:24.507: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.146205534s Nov 25 09:58:24.507: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:26.522: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.161094399s Nov 25 09:58:26.522: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:28.526: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 18.165304067s Nov 25 09:58:28.526: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:30.527: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 20.166600307s Nov 25 09:58:30.527: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:32.507: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 22.146579039s Nov 25 09:58:32.507: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:34.527: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 24.166191784s Nov 25 09:58:34.527: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:36.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 26.155354246s Nov 25 09:58:36.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:38.616: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 28.255159243s Nov 25 09:58:38.616: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:40.523: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 30.162229584s Nov 25 09:58:40.523: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:42.522: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.161198976s Nov 25 09:58:42.522: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:44.624: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 34.263165463s Nov 25 09:58:44.624: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:46.543: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 36.182719469s Nov 25 09:58:46.543: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:48.555: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 38.194309839s Nov 25 09:58:48.555: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:50.628: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 40.267358056s Nov 25 09:58:50.628: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:52.513: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 42.152634062s Nov 25 09:58:52.513: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:54.525: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 44.164149149s Nov 25 09:58:54.525: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:56.530: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 46.169347207s Nov 25 09:58:56.530: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:58:58.525: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 48.164728588s Nov 25 09:58:58.525: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:00.611: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 50.25006389s Nov 25 09:59:00.611: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:02.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 52.155267551s Nov 25 09:59:02.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:04.605: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 54.244566186s Nov 25 09:59:04.605: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:06.520: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 56.159053714s Nov 25 09:59:06.520: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:08.526: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 58.165874538s Nov 25 09:59:08.527: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 09:59:10.520: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.159169588s Nov 25 09:59:10.520: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 09:59:10.520: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 09:59:10.589: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-9860" to be "running and ready" Nov 25 09:59:10.690: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 100.670493ms Nov 25 09:59:10.690: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:12.743: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.153958537s Nov 25 09:59:12.743: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:14.846: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.257142201s Nov 25 09:59:14.846: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:16.753: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.163625241s Nov 25 09:59:16.753: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m0.549s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m33.356s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000b72780, 0xc0042ece00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0001a2380, 0xc0042ece00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003630000?}, 0xc0042ece00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003630000, 0xc0042ece00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc00431afc0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc003854660, 0xc0042ecd00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0038814c0, 0xc0042ecc00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0042ecc00, {0x7fad100, 0xc0038814c0}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc003854690, 0xc0042ecc00, {0x7f88fd808108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc003854690, 0xc0042ecc00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0042eca00, {0x7fe0bc8, 0xc0000820e0}, 0xc85c607bd74fd313?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0042eca00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc0018b8c40, {0x7fe0bc8, 0xc0000820e0}, {0xc004300e13, 0xb}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition.func1() test/e2e/framework/pod/wait.go:291 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xae75720?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 09:59:42.955: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 32.366176811s Nov 25 09:59:42.956: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:44.734: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 34.14466492s Nov 25 09:59:44.734: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:46.738: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 36.148735054s Nov 25 09:59:46.738: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:48.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 38.143604474s Nov 25 09:59:48.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:50.757: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 40.167829513s Nov 25 09:59:50.757: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:52.736: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 42.147101428s Nov 25 09:59:52.736: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:54.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.142767105s Nov 25 09:59:54.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:56.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 46.142471548s Nov 25 09:59:56.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 09:59:58.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 48.142554247s Nov 25 09:59:58.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:00.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 50.143552203s Nov 25 10:00:00.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m20.551s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m20.004s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m53.359s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:00:02.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 52.143650008s Nov 25 10:00:02.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:04.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 54.142699481s Nov 25 10:00:04.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:06.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 56.143440159s Nov 25 10:00:06.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:08.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 58.143488962s Nov 25 10:00:08.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:10.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.142890678s Nov 25 10:00:10.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:12.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.143393058s Nov 25 10:00:12.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:14.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.142791973s Nov 25 10:00:14.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:16.769: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.179344404s Nov 25 10:00:16.769: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:18.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.142945988s Nov 25 10:00:18.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:20.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.142972133s Nov 25 10:00:20.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m40.554s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m13.361s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:00:22.749: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.159954185s Nov 25 10:00:22.749: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:25.030: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.440764703s Nov 25 10:00:25.030: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:26.770: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.180589875s Nov 25 10:00:26.770: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:28.748: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.158755264s Nov 25 10:00:28.748: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:30.742: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.152740104s Nov 25 10:00:30.742: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:32.752: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.162802233s Nov 25 10:00:32.752: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:34.802: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.213147599s Nov 25 10:00:34.802: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:36.753: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.163291921s Nov 25 10:00:36.753: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:38.751: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.161558339s Nov 25 10:00:38.751: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:40.857: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.268022885s Nov 25 10:00:40.857: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m0.556s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m0.008s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m33.364s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:00:42.755: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.16578391s Nov 25 10:00:42.755: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:44.833: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.243605909s Nov 25 10:00:44.833: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:46.773: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.183842862s Nov 25 10:00:46.773: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:48.749: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.159553958s Nov 25 10:00:48.749: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:50.766: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.176913712s Nov 25 10:00:50.766: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:52.767: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.177364376s Nov 25 10:00:52.767: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:54.806: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.216636489s Nov 25 10:00:54.806: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:56.754: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.164370345s Nov 25 10:00:56.754: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:00:58.800: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.210940416s Nov 25 10:00:58.800: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:00.735: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.145536677s Nov 25 10:01:00.735: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m20.559s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m20.011s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m53.366s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) 
test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:01:02.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.14390085s Nov 25 10:01:02.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:04.748: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.158442258s Nov 25 10:01:04.748: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:06.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.14376572s Nov 25 10:01:06.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:08.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.143100044s Nov 25 10:01:08.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:10.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.142751891s Nov 25 10:01:10.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:12.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.142444449s Nov 25 10:01:12.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:14.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.143028133s Nov 25 10:01:14.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:16.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.143288077s Nov 25 10:01:16.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:18.734: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.145050694s Nov 25 10:01:18.734: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:20.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.143171287s Nov 25 10:01:20.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m40.56s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m40.012s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m13.368s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:01:22.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.142687908s Nov 25 10:01:22.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:24.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.143277316s Nov 25 10:01:24.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:26.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.14263429s Nov 25 10:01:26.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:28.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.143416501s Nov 25 10:01:28.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:30.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.142618145s Nov 25 10:01:30.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:32.739: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.149417535s Nov 25 10:01:32.739: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:34.739: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.149206643s Nov 25 10:01:34.739: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:36.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.142605795s Nov 25 10:01:36.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:38.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.143734706s Nov 25 10:01:38.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:40.734: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.145079896s Nov 25 10:01:40.734: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m0.562s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m0.014s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m33.37s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:01:42.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.143616339s Nov 25 10:01:42.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:44.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.143854559s Nov 25 10:01:44.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:46.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.142702086s Nov 25 10:01:46.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:48.736: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.146414901s Nov 25 10:01:48.736: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:50.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.143194926s Nov 25 10:01:50.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:52.734: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.144419868s Nov 25 10:01:52.734: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:54.745: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.155785017s Nov 25 10:01:54.745: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:56.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.142944134s Nov 25 10:01:56.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:01:58.733: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.143897529s Nov 25 10:01:58.733: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:00.735: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.145903686s Nov 25 10:02:00.735: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m20.565s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m20.017s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m53.372s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:02:02.732: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.142648112s Nov 25 10:02:02.732: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:04.736: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.147097377s Nov 25 10:02:04.736: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:06.768: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.178814118s Nov 25 10:02:06.768: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:08.734: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.144626862s Nov 25 10:02:08.734: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:10.760: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.170906386s Nov 25 10:02:10.760: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:12.760: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.170335581s Nov 25 10:02:12.760: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:14.833: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.243219418s Nov 25 10:02:14.833: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:16.761: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.171732682s Nov 25 10:02:16.761: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:18.765: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.175966249s Nov 25 10:02:18.765: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:20.754: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.164485065s Nov 25 10:02:20.754: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m40.567s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m40.02s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 6m13.375s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:02:22.773: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.184020315s Nov 25 10:02:22.773: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:24.792: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.20268556s Nov 25 10:02:24.792: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:26.755: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.166092054s Nov 25 10:02:26.755: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:28.757: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.168189239s Nov 25 10:02:28.758: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:30.804: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.21456489s Nov 25 10:02:30.804: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:32.751: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.16168681s Nov 25 10:02:32.751: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:34.811: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.222144429s Nov 25 10:02:34.811: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:36.741: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.151740171s Nov 25 10:02:36.741: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:38.861: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.27211323s Nov 25 10:02:38.861: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:40.765: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.175472388s Nov 25 10:02:40.765: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m0.57s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m0.022s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 6m33.378s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005c5080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00371d5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0xc004300e13, 0xb}, {0x75ee704, 0x11}, 0xc0018d2f30?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0036bc4e0?}, {0xc004300e13?, 0x0?}, {0xc0039da040?, 0x0?}, 0xc0042d35c0?) 
test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011ccd20, {0x75c6f7c, 0x9}, 0xc0042d4b70) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011ccd20, 0x7f88c45da528?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:02:42.775: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.185849724s Nov 25 10:02:42.775: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:44.755: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.165673406s Nov 25 10:02:44.755: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:46.785: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.195428003s Nov 25 10:02:46.785: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:48.755: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.166003702s Nov 25 10:02:48.755: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 10:02:50.787: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 3m40.197253213s Nov 25 10:02:50.787: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 10:02:50.787: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 10:02:50.845 Nov 25 10:02:51.050: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-9860" to be "running" Nov 25 10:02:51.159: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 109.548446ms Nov 25 10:02:53.281: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231320074s Nov 25 10:02:55.209: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158870378s Nov 25 10:02:57.207: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.157594777s Nov 25 10:02:57.207: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 10:02:57.251: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 10:02:57.251 Nov 25 10:02:57.251: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 10:02:57.343 Nov 25 10:02:57.442: INFO: Service node-port-service in namespace esipp-9860 found. Nov 25 10:02:57.588: INFO: Service session-affinity-service in namespace esipp-9860 found. 
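The repeated stack traces above all bottom out in the same two wait helpers: WaitForPodCondition driving wait.PollImmediate until the pod's Ready condition turns true, and (in the step that follows) WaitForServiceEndpointsNum driving wait.Poll until the service's Endpoints object reports the expected number of addresses. As a rough illustration of those two polling patterns, and not the e2e framework's actual implementation, a minimal client-go sketch might look like the following; the helper names waitForPodReady and waitForEndpointCount, the 2s/1s intervals, and the package name are assumptions made for the example.

// Package e2ewait is an illustrative, library-style sketch of the two polling
// patterns visible in the stack traces above. It is NOT the Kubernetes e2e
// framework's code; names and intervals here are assumptions for the example.
package e2ewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls the pod every 2s until its Ready condition is True or
// the timeout expires, mirroring the "Running (Ready = false)" loop logged above.
func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // give up on API errors; a real helper might tolerate transient ones
		}
		if pod.Status.Phase == corev1.PodFailed || pod.Status.Phase == corev1.PodSucceeded {
			return false, fmt.Errorf("pod %s/%s terminated (phase %s)", ns, name, pod.Status.Phase)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil // done
			}
		}
		return false, nil // not ready yet, keep polling
	})
}

// waitForEndpointCount polls the service's Endpoints object once a second until
// the number of ready addresses reaches want, the pattern behind the repeated
// "Waiting for amount of service:node-port-service endpoints to be 3" lines.
func waitForEndpointCount(ctx context.Context, c kubernetes.Interface, ns, svc string, want int, timeout time.Duration) error {
	return wait.Poll(time.Second, timeout, func() (bool, error) {
		ep, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
		if err != nil {
			return false, nil // endpoints may not exist yet; keep waiting
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses)
		}
		return got == want, nil
	})
}

In the run logged here, the first pattern eventually succeeds (netserver-2 turns Ready at 10:02:50 after roughly 3m40s of polling), while the second never observes 3 addresses and times out at 10:03:27, which is what surfaces below as the "timed out waiting for the condition" failure.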
STEP: Waiting for NodePort service to expose endpoint 11/25/22 10:02:57.629 Nov 25 10:02:58.634: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:02:59.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:00.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m20.573s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m20.025s) test/e2e/network/loadbalancer.go:1346 At [By Step] Waiting for NodePort service to expose endpoint (Step Runtime: 3.546s) test/e2e/framework/network/utils.go:832 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044909d8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x20?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x754e980?, 0xc00371db70?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework.WaitForServiceEndpointsNum({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0x75ee1b4, 0x11}, 0x3, 0x0?, 0x7f88fd808f18?) test/e2e/framework/util.go:424 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) 
test/e2e/framework/network/utils.go:833 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:03:01.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:02.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:03.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:04.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:05.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:06.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:07.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:08.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:09.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:10.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:11.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:12.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:13.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:14.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:15.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:16.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:17.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:18.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:19.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:20.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m40.575s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m40.027s) test/e2e/network/loadbalancer.go:1346 At [By Step] Waiting for NodePort service to expose endpoint (Step Runtime: 23.548s) test/e2e/framework/network/utils.go:832 Spec Goroutine goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044909d8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x20?, 0x2fd9d05?, 0x38?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x754e980?, 0xc00371db70?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework.WaitForServiceEndpointsNum({0x801de88?, 0xc0036bc4e0}, {0xc0039da040, 0xa}, {0x75ee1b4, 0x11}, 0x3, 0x0?, 0x7f88fd808f18?) test/e2e/framework/util.go:424 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) test/e2e/framework/network/utils.go:833 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b72f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:03:21.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:22.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:23.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:24.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:25.629: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:26.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:27.630: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:27.672: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:03:27.713: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-9860: <*errors.errorString | 0xc000195d80>: { s: "timed out waiting for the condition", } Nov 25 10:03:27.713: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-9860: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011ccd20, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000c02000, {0x0, 0x0, 0xc0010b5540?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 Nov 25 10:03:27.822: INFO: Waiting up to 15m0s for service "external-local-nodes" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 10:03:38.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 10:03:38.195: INFO: Output of kubectl describe svc: Nov 25 10:03:38.195: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=esipp-9860 describe svc --namespace=esipp-9860' Nov 25 10:03:38.956: INFO: stderr: "" Nov 25 10:03:38.956: INFO: stdout: "Name: external-local-nodes\nNamespace: esipp-9860\nLabels: testid=external-local-nodes-3c6f0a08-3c08-4116-945b-dc55ddf58d45\nAnnotations: <none>\nSelector: testid=external-local-nodes-3c6f0a08-3c08-4116-945b-dc55ddf58d45\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.193.227\nIPs: 10.0.193.227\nPort: <unset> 8081/TCP\nTargetPort: 80/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 8m22s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 7m32s service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 3m21s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 3m17s service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 93s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 89s service-controller Ensured load balancer\n\n\nName: node-port-service\nNamespace: esipp-9860\nLabels: <none>\nAnnotations: <none>\nSelector: selector-37df59c6-97a4-4bfe-9dc1-2c59f7aa5ca0=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.105.141\nIPs: 10.0.105.141\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 32066/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30699/UDP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-9860\nLabels: <none>\nAnnotations: <none>\nSelector: selector-37df59c6-97a4-4bfe-9dc1-2c59f7aa5ca0=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.140.72\nIPs: 10.0.140.72\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 32300/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 31482/UDP\nEndpoints: <none>\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 10:03:38.956: INFO: Name: external-local-nodes Namespace: esipp-9860 Labels: testid=external-local-nodes-3c6f0a08-3c08-4116-945b-dc55ddf58d45 Annotations: <none> Selector: testid=external-local-nodes-3c6f0a08-3c08-4116-945b-dc55ddf58d45 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.193.227 IPs: 10.0.193.227 Port: <unset> 8081/TCP TargetPort: 80/TCP Endpoints: <none> Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 8m22s service-controller 
Ensuring load balancer Normal EnsuredLoadBalancer 7m32s service-controller Ensured load balancer Normal EnsuringLoadBalancer 3m21s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 3m17s service-controller Ensured load balancer Normal EnsuringLoadBalancer 93s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 89s service-controller Ensured load balancer Name: node-port-service Namespace: esipp-9860 Labels: <none> Annotations: <none> Selector: selector-37df59c6-97a4-4bfe-9dc1-2c59f7aa5ca0=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.105.141 IPs: 10.0.105.141 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 32066/TCP Endpoints: <none> Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 30699/UDP Endpoints: <none> Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-9860 Labels: <none> Annotations: <none> Selector: selector-37df59c6-97a4-4bfe-9dc1-2c59f7aa5ca0=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.140.72 IPs: 10.0.140.72 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 32300/TCP Endpoints: <none> Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 31482/UDP Endpoints: <none> Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:03:38.957 STEP: Collecting events from namespace "esipp-9860". 11/25/22 10:03:38.957 STEP: Found 34 events. 11/25/22 10:03:39.004 Nov 25 10:03:39.004: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-9860/test-container-pod to bootstrap-e2e-minion-group-n625 Nov 25 10:03:39.004: INFO: At 2022-11-25 09:55:16 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:06 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:08 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned esipp-9860/netserver-0 to bootstrap-e2e-minion-group-428h Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:08 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned esipp-9860/netserver-1 to bootstrap-e2e-minion-group-n625 Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:08 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned esipp-9860/netserver-2 to bootstrap-e2e-minion-group-td9f Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Created: Created container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Started: Started container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container 
webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:09 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:10 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Started: Started container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:10 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:10 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Created: Created container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:11 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Killing: Stopping container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:11 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Killing: Stopping container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:11 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:12 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:12 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:12 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:15 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-9860(6fd9ad06-fcf3-4f5b-8ce0-a4542eda396a) Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:15 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-9860(876209fe-0035-4215-a2d3-86f05a2eced3) Nov 25 10:03:39.004: INFO: At 2022-11-25 09:56:16 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} BackOff: Back-off restarting failed container webserver in pod netserver-0_esipp-9860(031a8c72-fab6-4f74-9058-751b6a74ab22) Nov 25 10:03:39.004: INFO: At 2022-11-25 10:00:17 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 10:00:21 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:05 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:09 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:52 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:52 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} Created: Created container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:52 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} Started: Started container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:52 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} Killing: Stopping container webserver Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:53 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 10:03:39.004: INFO: At 2022-11-25 10:02:57 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-n625} BackOff: Back-off restarting failed container webserver in pod test-container-pod_esipp-9860(45b5bbe7-ad7c-4f3e-862e-8ff8546da448) Nov 25 10:03:39.051: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:03:39.051: INFO: netserver-0 bootstrap-e2e-minion-group-428h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:00:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC }] Nov 25 10:03:39.051: INFO: netserver-1 bootstrap-e2e-minion-group-n625 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:03:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:03:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC }] Nov 25 10:03:39.051: INFO: netserver-2 bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:02:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:02:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 09:56:08 +0000 UTC }] Nov 25 10:03:39.051: INFO: test-container-pod bootstrap-e2e-minion-group-n625 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:02:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:03:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:03:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:02:51 +0000 UTC }] Nov 25 10:03:39.051: INFO: Nov 25 10:03:39.477: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:03:39.520: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 5134 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 09:59:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 09:59:46 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 09:59:46 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 09:59:46 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 09:59:46 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:03:39.520: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:03:39.598: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:03:39.666: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container konnectivity-server-container ready: true, restart count 3 Nov 25 10:03:39.666: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container kube-scheduler ready: true, restart count 2 Nov 25 10:03:39.666: INFO: 
etcd-server-events-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container etcd-container ready: true, restart count 1 Nov 25 10:03:39.666: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 09:52:57 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container kube-addon-manager ready: true, restart count 3 Nov 25 10:03:39.666: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 09:52:57 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 25 10:03:39.666: INFO: metadata-proxy-v0.1-z25qg started at 2022-11-25 09:53:48 +0000 UTC (0+2 container statuses recorded) Nov 25 10:03:39.666: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 10:03:39.666: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 10:03:39.666: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container kube-apiserver ready: true, restart count 0 Nov 25 10:03:39.666: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container kube-controller-manager ready: false, restart count 3 Nov 25 10:03:39.666: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 09:52:39 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:39.666: INFO: Container etcd-container ready: true, restart count 2 Nov 25 10:03:40.032: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 10:03:40.032: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:03:40.075: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 6885 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 09:57:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 10:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-25 10:03:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:03:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:02:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:02:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:02:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:02:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:03:40.076: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:03:40.126: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:03:40.391: INFO: volume-snapshot-controller-0 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.391: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 10:03:40.391: INFO: hostexec-bootstrap-e2e-minion-group-428h-hhv5t started at 2022-11-25 10:02:23 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.391: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 10:03:40.391: INFO: hostpath-injector started at 2022-11-25 09:54:53 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.391: INFO: Container hostpath-injector ready: false, restart count 0 Nov 25 10:03:40.391: INFO: coredns-6d97d5ddb-fjb9w started at 2022-11-25 09:53:33 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container coredns ready: false, restart count 6 Nov 25 10:03:40.392: INFO: csi-mockplugin-0 started at 2022-11-25 09:54:57 +0000 UTC (0+4 
container statuses recorded) Nov 25 10:03:40.392: INFO: Container busybox ready: true, restart count 5 Nov 25 10:03:40.392: INFO: Container csi-provisioner ready: false, restart count 4 Nov 25 10:03:40.392: INFO: Container driver-registrar ready: false, restart count 4 Nov 25 10:03:40.392: INFO: Container mock ready: false, restart count 4 Nov 25 10:03:40.392: INFO: kube-proxy-bootstrap-e2e-minion-group-428h started at 2022-11-25 09:53:21 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container kube-proxy ready: true, restart count 5 Nov 25 10:03:40.392: INFO: coredns-6d97d5ddb-k646d started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container coredns ready: false, restart count 6 Nov 25 10:03:40.392: INFO: konnectivity-agent-srgs2 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 25 10:03:40.392: INFO: metadata-proxy-v0.1-fg9tk started at 2022-11-25 09:53:22 +0000 UTC (0+2 container statuses recorded) Nov 25 10:03:40.392: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 10:03:40.392: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 10:03:40.392: INFO: execpod-drop9z9vs started at 2022-11-25 10:02:35 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 10:03:40.392: INFO: csi-hostpathplugin-0 started at 2022-11-25 09:54:43 +0000 UTC (0+7 container statuses recorded) Nov 25 10:03:40.392: INFO: Container csi-attacher ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container csi-resizer ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container hostpath ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container liveness-probe ready: false, restart count 6 Nov 25 10:03:40.392: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 25 10:03:40.392: INFO: netserver-0 started at 2022-11-25 09:56:08 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container webserver ready: false, restart count 5 Nov 25 10:03:40.392: INFO: kube-dns-autoscaler-5f6455f985-gvgsn started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container autoscaler ready: false, restart count 5 Nov 25 10:03:40.392: INFO: l7-default-backend-8549d69d99-f9sx9 started at 2022-11-25 09:53:27 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 10:03:40.392: INFO: external-provisioner-hsxql started at 2022-11-25 10:02:43 +0000 UTC (0+1 container statuses recorded) Nov 25 10:03:40.392: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 25 10:03:40.431: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:03:40.470: INFO: Error getting node info Get "https://35.230.98.143/api/v1/nodes/bootstrap-e2e-minion-group-n625": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:03:40.470: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:03:40.471: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:03:40.510: INFO: Unexpected error retrieving node events Get "https://35.230.98.143/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-n625": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:03:40.510: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:03:40.549: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: Get "https://35.230.98.143/api/v1/nodes/bootstrap-e2e-minion-group-n625:10250/proxy/pods": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:03:40.549: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:03:40.588: INFO: Error getting node info Get "https://35.230.98.143/api/v1/nodes/bootstrap-e2e-minion-group-td9f": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:03:40.588: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:03:40.589: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:03:40.628: INFO: Unexpected error retrieving node events Get "https://35.230.98.143/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-td9f": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:03:40.628: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:03:40.667: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: Get "https://35.230.98.143/api/v1/nodes/bootstrap-e2e-minion-group-td9f:10250/proxy/pods": dial tcp 35.230.98.143:443: connect: connection refused [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-9860" for this suite. 
11/25/22 10:03:40.667
Nov 25 10:03:40.707: FAIL: Couldn't delete ns: "esipp-9860": Delete "https://35.230.98.143/api/v1/namespaces/esipp-9860": dial tcp 35.230.98.143:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.98.143/api/v1/namespaces/esipp-9860", Err:(*net.OpError)(0xc000e05810)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c02000)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc00105abb0?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc00105abb0?, 0x0?}, {0xae73300?, 0x0?, 0x0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/network/utils.go:834
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0008ce380, 0x3c?)
    test/e2e/framework/network/utils.go:834 +0x545
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000b56000, {0x0, 0x0, 0x0?})
    test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.3.1()
    test/e2e/network/loadbalancer.go:1285 +0x10a
k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1312 +0x37f
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:17:50.037 Nov 25 10:17:50.037: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 10:17:50.039 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:17:50.522 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:17:50.646 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=LoadBalancer test/e2e/network/loadbalancer.go:1266 STEP: creating a service esipp-6493/external-local-lb with type=LoadBalancer 11/25/22 10:17:50.942 STEP: setting ExternalTrafficPolicy=Local 11/25/22 10:17:50.943 STEP: waiting for loadbalancer for service esipp-6493/external-local-lb 11/25/22 10:17:51.142 Nov 25 10:17:51.142: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-lb 11/25/22 10:18:51.36 Nov 25 10:18:51.471: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 10:18:51.559: INFO: Found all 1 pods Nov 25 10:18:51.559: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-lb-twl5x] Nov 25 10:18:51.559: INFO: Waiting up to 2m0s for pod "external-local-lb-twl5x" in namespace "esipp-6493" to be "running and ready" Nov 25 10:18:51.635: INFO: Pod "external-local-lb-twl5x": Phase="Pending", Reason="", readiness=false. Elapsed: 75.487046ms Nov 25 10:18:51.635: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-twl5x' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:18:53.693: INFO: Pod "external-local-lb-twl5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134206482s Nov 25 10:18:53.693: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-twl5x' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:18:55.726: INFO: Pod "external-local-lb-twl5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.166619394s Nov 25 10:18:55.726: INFO: Pod "external-local-lb-twl5x" satisfied condition "running and ready" Nov 25 10:18:55.726: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-lb-twl5x] STEP: waiting for loadbalancer for service esipp-6493/external-local-lb 11/25/22 10:18:55.726 Nov 25 10:18:55.726: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: reading clientIP using the TCP service's service port via its external VIP 11/25/22 10:18:55.879 Nov 25 10:18:55.879: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:18:55.919: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:18:57.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:18:57.959: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:18:59.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:18:59.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:19:01.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:11.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:19:13.919: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:23.919: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:19:25.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:25.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:19:27.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:37.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:19:39.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:39.961: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:19:41.919: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:51.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:19:51.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:51.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:19:53.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:19:53.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:19:55.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:05.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:20:07.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:17.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:20:19.920: INFO: 
Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:29.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:20:31.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:31.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:20:33.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:43.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:20:45.919: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:20:55.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:20:57.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:07.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:21:09.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:19.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:21:21.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:21.961: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:23.919: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:23.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:25.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:35.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:21:35.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:35.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:37.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:37.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:39.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:49.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:21:51.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:51.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:53.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:53.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:55.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:55.960: INFO: 
Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:57.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:57.959: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:21:59.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:21:59.959: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:22:01.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:22:11.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:22:13.919: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:22:23.920: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:22:23.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:22:23.960: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": dial tcp 34.105.112.215:80: connect: connection refused Nov 25 10:22:25.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:22:35.921: INFO: Poke("http://34.105.112.215:80/clientip"): Get "http://34.105.112.215:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 10:22:37.920: INFO: Poking "http://34.105.112.215:80/clientip" Nov 25 10:22:38.000: INFO: Poke("http://34.105.112.215:80/clientip"): success Nov 25 10:22:38.000: INFO: ClientIP detected by target pod using VIP:SvcPort is 35.188.115.87:56056 STEP: checking if Source IP is preserved 11/25/22 10:22:38 Nov 25 10:22:38.269: INFO: Waiting up to 15m0s for service "external-local-lb" to have no LoadBalancer STEP: Performing setup for networking test in namespace esipp-6493 11/25/22 10:22:49.802 STEP: creating a selector 11/25/22 10:22:49.802 STEP: Creating the service pods in kubernetes 11/25/22 10:22:49.802 Nov 25 10:22:49.802: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 10:22:50.033: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-6493" to be "running and ready" Nov 25 10:22:50.075: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.160841ms Nov 25 10:22:50.075: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m0.906s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1266 At [By Step] Creating the service pods in kubernetes (Step Runtime: 1.141s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 2931 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0007ffef0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x70?, 0x2fd9d05?, 0x70?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0x75b521a?, 0xc0008f75c0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000848b60}, {0xc002f9f460, 0xa}, {0xc002fc70f0, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000848b60?}, {0xc002fc70f0?, 0xc0032dd9a0?}, {0xc002f9f460?, 0xc0008f7808?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0008ce380, {0x75c6f7c, 0x9}, 0xc0047804e0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0008ce380, 0x7f42ce76ef50?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0008ce380, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000b56000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002830d80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:22:52.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.085377638s Nov 25 10:22:52.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:22:54.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.08551219s Nov 25 10:22:54.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:22:56.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.085803002s Nov 25 10:22:56.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:22:58.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.086159978s Nov 25 10:22:58.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:00.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.085705074s Nov 25 10:23:00.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:02.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.085812716s Nov 25 10:23:02.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:04.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.085859327s Nov 25 10:23:04.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:06.121: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.088063104s Nov 25 10:23:06.121: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:08.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.085293859s Nov 25 10:23:08.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 10:23:10.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.085792523s Nov 25 10:23:10.119: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m20.908s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:1266 At [By Step] Creating the service pods in kubernetes (Step Runtime: 21.143s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 2931 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0007ffef0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x70?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0x75b521a?, 0xc0008f75c0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000848b60}, {0xc002f9f460, 0xa}, {0xc002fc70f0, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000848b60?}, {0xc002fc70f0?, 0xc0032dd9a0?}, {0xc002f9f460?, 0xc0008f7808?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0008ce380, {0x75c6f7c, 0x9}, 0xc0047804e0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0008ce380, 0x7f42ce76ef50?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0008ce380, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000b56000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002830d80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:23:12.119: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.085731546s Nov 25 10:23:12.119: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 10:23:12.119: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 10:23:12.162: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-6493" to be "running and ready" Nov 25 10:23:12.208: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 45.56959ms Nov 25 10:23:12.208: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 10:23:12.208: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 10:23:12.253: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-6493" to be "running and ready" Nov 25 10:23:12.297: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 43.543166ms Nov 25 10:23:12.297: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 10:23:12.297: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 10:23:12.338 Nov 25 10:23:12.411: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-6493" to be "running" Nov 25 10:23:12.454: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 42.084954ms Nov 25 10:23:14.498: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.086130772s Nov 25 10:23:14.498: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 10:23:14.539: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 10:23:14.539 Nov 25 10:23:14.539: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 10:23:14.633 Nov 25 10:23:14.736: INFO: Service node-port-service in namespace esipp-6493 found. Nov 25 10:23:14.875: INFO: Service session-affinity-service in namespace esipp-6493 found. 
STEP: Waiting for NodePort service to expose endpoint 11/25/22 10:23:14.917 Nov 25 10:23:15.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:16.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:17.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:18.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:19.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:20.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:21.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:22.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:23.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:24.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:25.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:26.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:27.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:28.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:29.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:30.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m40.911s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:1266 At [By Step] Waiting for NodePort service to expose endpoint (Step Runtime: 16.031s) test/e2e/framework/network/utils.go:832 Spec Goroutine goroutine 2931 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc00091d980, 0xc0011b9300) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc00168f680, 0xc0011b9300, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00355e000?}, 0xc0011b9300?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00355e000, 0xc0011b9300) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc002aebf80?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0037c5560, 0xc0011b9200) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00358d460, 0xc0011b9100) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0011b9100, {0x7fad100, 0xc00358d460}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0037c5590, 0xc0011b9100, {0x7f4305824108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0037c5590, 0xc0011b9100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) 
/usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0011b8f00, {0x7fe0bc8, 0xc0001b0008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0011b8f00, {0x7fe0bc8, 0xc0001b0008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*endpoints).List(0xc00134e540, {0x7fe0bc8, 0xc0001b0008}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, ...}, ...}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/endpoints.go:95 k8s.io/kubernetes/test/e2e/framework.WaitForServiceEndpointsNum.func1() test/e2e/framework/util.go:426 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0026f6798, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x8?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001b0000}, 0x754e980?, 0xc0008f7b58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework.WaitForServiceEndpointsNum({0x801de88?, 0xc000848b60}, {0xc002f9f460, 0xa}, {0x75ee1b4, 0x11}, 0x3, 0x0?, 0x7f4305824a68?) test/e2e/framework/util.go:424 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0008ce380, 0x3c?) 
test/e2e/framework/network/utils.go:833 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000b56000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002830d80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:23:31.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:32.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:33.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:34.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:35.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:36.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:37.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:38.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:39.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:40.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:41.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:42.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:43.917: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:44.918: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:44.999: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 10:23:45.041: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-6493: <*errors.errorString | 0xc0002419e0>: { s: "timed out waiting for the condition", } Nov 25 10:23:45.041: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-6493: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0008ce380, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000b56000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 +0x10a k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 +0x37f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 10:23:45.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 10:23:45.094: INFO: Output of kubectl describe svc: Nov 25 10:23:45.094: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.98.143 --kubeconfig=/workspace/.kube/config --namespace=esipp-6493 describe svc --namespace=esipp-6493' Nov 25 10:23:45.702: INFO: stderr: "" Nov 25 10:23:45.702: INFO: stdout: "Name: external-local-lb\nNamespace: esipp-6493\nLabels: testid=external-local-lb-7595e5b9-2cd9-4329-aca4-c81a6b1510ee\nAnnotations: <none>\nSelector: testid=external-local-lb-7595e5b9-2cd9-4329-aca4-c81a6b1510ee\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.29.181\nIPs: 10.0.29.181\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.2.208:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 5m32s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 4m55s service-controller Ensured load balancer\n\n\nName: node-port-service\nNamespace: esipp-6493\nLabels: <none>\nAnnotations: <none>\nSelector: selector-3c5981e9-be3c-4c69-b502-ae24c95ff134=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.9.138\nIPs: 10.0.9.138\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 30341/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30437/UDP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-6493\nLabels: <none>\nAnnotations: <none>\nSelector: selector-3c5981e9-be3c-4c69-b502-ae24c95ff134=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.243.167\nIPs: 10.0.243.167\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31547/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30400/UDP\nEndpoints: <none>\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 10:23:45.703: INFO: Name: external-local-lb Namespace: esipp-6493 Labels: testid=external-local-lb-7595e5b9-2cd9-4329-aca4-c81a6b1510ee Annotations: <none> Selector: testid=external-local-lb-7595e5b9-2cd9-4329-aca4-c81a6b1510ee Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.29.181 IPs: 10.0.29.181 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.64.2.208:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 5m32s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 4m55s service-controller Ensured load balancer Name: node-port-service Namespace: esipp-6493 Labels: <none> Annotations: <none> Selector: selector-3c5981e9-be3c-4c69-b502-ae24c95ff134=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.9.138 IPs: 10.0.9.138 
Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 30341/TCP Endpoints: <none> Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 30437/UDP Endpoints: <none> Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-6493 Labels: <none> Annotations: <none> Selector: selector-3c5981e9-be3c-4c69-b502-ae24c95ff134=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.243.167 IPs: 10.0.243.167 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 31547/TCP Endpoints: <none> Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 30400/UDP Endpoints: <none> Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:23:45.703 STEP: Collecting events from namespace "esipp-6493". 11/25/22 10:23:45.703 STEP: Found 32 events. 11/25/22 10:23:45.749 Nov 25 10:23:45.749: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-lb-twl5x: { } Scheduled: Successfully assigned esipp-6493/external-local-lb-twl5x to bootstrap-e2e-minion-group-td9f Nov 25 10:23:45.749: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-6493/netserver-0 to bootstrap-e2e-minion-group-428h Nov 25 10:23:45.749: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-6493/netserver-1 to bootstrap-e2e-minion-group-n625 Nov 25 10:23:45.749: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-6493/netserver-2 to bootstrap-e2e-minion-group-td9f Nov 25 10:23:45.749: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-6493/test-container-pod to bootstrap-e2e-minion-group-td9f Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:13 +0000 UTC - event for external-local-lb: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:50 +0000 UTC - event for external-local-lb: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:51 +0000 UTC - event for external-local-lb: {replication-controller } SuccessfulCreate: Created pod: external-local-lb-twl5x Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:53 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:53 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container netexec Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:53 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container netexec Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:55 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container netexec Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:56 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 10:23:45.749: INFO: At 2022-11-25 10:18:57 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Unhealthy: Readiness probe failed: Get "http://10.64.2.206:80/hostName": dial tcp 10.64.2.206:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 25 10:23:45.749: INFO: At 2022-11-25 10:19:00 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} BackOff: Back-off restarting failed container netexec in pod external-local-lb-twl5x_esipp-6493(b2784ac4-ad32-4159-9dcf-8e3b6dd9091e) Nov 25 10:23:45.749: INFO: At 2022-11-25 10:19:00 +0000 UTC - event for external-local-lb-twl5x: {kubelet bootstrap-e2e-minion-group-td9f} Unhealthy: Readiness probe failed: Get "http://10.64.2.207:80/hostName": dial tcp 10.64.2.207:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Created: Created container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Started: Started container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Created: Created container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-n625} Started: Started container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:50 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:51 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} Killing: Stopping container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:52 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} Killing: Stopping container webserver Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:52 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-td9f} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 10:23:45.749: INFO: At 2022-11-25 10:22:53 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-428h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 10:23:45.749: INFO: At 2022-11-25 10:23:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-td9f} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 10:23:45.749: INFO: At 2022-11-25 10:23:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-td9f} Created: Created container webserver Nov 25 10:23:45.750: INFO: At 2022-11-25 10:23:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-td9f} Started: Started container webserver Nov 25 10:23:45.795: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:23:45.795: INFO: external-local-lb-twl5x bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:19:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:19:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:18:51 +0000 UTC }] Nov 25 10:23:45.795: INFO: netserver-0 bootstrap-e2e-minion-group-428h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:49 +0000 UTC }] Nov 25 10:23:45.795: INFO: netserver-1 bootstrap-e2e-minion-group-n625 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:49 +0000 UTC }] Nov 25 10:23:45.795: INFO: netserver-2 bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:22:50 +0000 UTC }] Nov 25 10:23:45.795: INFO: test-container-pod bootstrap-e2e-minion-group-td9f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 10:23:12 +0000 UTC }] Nov 25 10:23:45.795: INFO: Nov 25 10:23:45.841: INFO: Unable to fetch esipp-6493/external-local-lb-twl5x/netexec logs: an error on the server ("unknown") has prevented the request from succeeding (get pods external-local-lb-twl5x) Nov 25 10:23:45.885: INFO: Unable to fetch esipp-6493/netserver-0/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods netserver-0) Nov 25 10:23:45.929: INFO: Unable to fetch esipp-6493/netserver-1/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods netserver-1) Nov 25 10:23:45.972: INFO: Unable to fetch esipp-6493/netserver-2/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods netserver-2) Nov 25 10:23:46.018: INFO: Unable to fetch esipp-6493/test-container-pod/webserver logs: an error on the server ("unknown") 
has prevented the request from succeeding (get pods test-container-pod) Nov 25 10:23:46.067: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:23:46.108: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 13460 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:20:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:20:57 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:20:57 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:20:57 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:20:57 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:23:46.109: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:23:46.153: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:23:46.196: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:23:46.196: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:23:46.239: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 13939 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-589":"bootstrap-e2e-minion-group-428h","csi-hostpath-multivolume-6325":"bootstrap-e2e-minion-group-428h","csi-hostpath-multivolume-8555":"bootstrap-e2e-minion-group-428h","csi-hostpath-provisioning-1340":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:18:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:23:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:23:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:23:30 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:18:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:18:54 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:23:46.239: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:23:46.285: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:23:46.334: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:23:46.334: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:23:46.375: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 13917 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5017":"bootstrap-e2e-minion-group-n625","csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-hostpath-multivolume-829":"bootstrap-e2e-minion-group-n625","csi-hostpath-provisioning-6576":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-4433":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:18:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 
2022-11-25 10:23:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:23:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 
UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:23:37 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:23:46.376: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:23:46.420: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:23:46.463: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:23:46.463: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:23:46.514: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 13922 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f","csi-mock-csi-mock-volumes-3680":"csi-mock-csi-mock-volumes-3680"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 
09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:23:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 
2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:23:36 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:23:40 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:23:46.514: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:23:46.559: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 10:23:46.603: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6493" for this suite. 11/25/22 10:23:46.603
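The Node dumps above are what the framework prints after a failure when it can no longer reach the kubelets ("No agent available"); for triage, usually only each node's Ready condition and last heartbeat matter. A minimal client-go sketch for extracting just those fields (a standalone illustration, not part of the e2e framework; the kubeconfig path is the one this run logs, everything else is assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Standalone sketch: list the cluster's nodes and print only the Ready
// condition instead of dumping the entire Node object as the debug pass does.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s\tReady=%s\t%s\t%v\n", n.Name, c.Status, c.Reason, c.LastHeartbeatTime)
			}
		}
	}
}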
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001172000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
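Two distinct failures are recorded for this NodePort ESIPP test. The primary one is the framework's BeforeEach (framework.go:241) timing out while waiting for the "default" service account, because every request to the API server at 35.230.98.143:443 was refused (see the retries logged below). Roughly, that wait is a poll loop of the following shape (an illustration with assumed intervals, not the framework's exact helper):

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// Illustrative only; the real helper lives in test/e2e/framework and its
// intervals and timeout may differ.
func waitForDefaultServiceAccount(c clientset.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			// Errors (including a down API server) are retried until the timeout,
			// after which the test fails with "timed out waiting for the condition".
			return false, nil
		}
		return true, nil
	})
}

The secondary [PANICKED] failure comes from the test's own AfterEach at loadbalancer.go:1262 dereferencing state that was never initialized, since setup aborted before reaching it. A hypothetical nil guard of this shape (a sketch, not the actual test code) would keep the cleanup from masking the primary failure:

package e2esketch

import (
	"github.com/onsi/ginkgo/v2"
	clientset "k8s.io/client-go/kubernetes"
)

// Hypothetical sketch only, not the code at test/e2e/network/loadbalancer.go:1262:
// when BeforeEach aborts early, shared state such as the clientset can still be
// nil by the time AfterEach runs, which is how a cleanup step turns into the
// "invalid memory address or nil pointer dereference" panic reported above.
var cs clientset.Interface

var _ = ginkgo.Describe("LoadBalancers ESIPP [Slow] (illustrative)", func() {
	ginkgo.AfterEach(func() {
		if cs == nil {
			return // setup never completed; nothing to tear down
		}
		_ = cs // ... tear down load balancer resources with cs ...
	})
})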
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:10:17.69 Nov 25 10:10:17.690: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 10:10:17.692 Nov 25 10:10:17.731: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:19.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:21.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:23.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:25.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:27.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:29.773: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:31.774: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:33.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:35.771: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:37.773: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:12:44.882: INFO: Unexpected error: <*fmt.wrapError | 0xc00050a000>: { msg: "wait for service account \"default\" in namespace \"esipp-5007\": timed out waiting for the condition", err: <*errors.errorString | 0xc000205ce0>{ s: "timed out waiting for the condition", }, } Nov 25 10:12:44.882: FAIL: wait for service account "default" in namespace "esipp-5007": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001172000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 10:12:44.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:12:44.966 STEP: Collecting events from namespace "esipp-5007". 11/25/22 10:12:44.966 STEP: Found 0 events. 
11/25/22 10:12:45.009 Nov 25 10:12:45.060: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:12:45.060: INFO: Nov 25 10:12:45.112: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:12:45.159: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 10287 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:10:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:12:45.159: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:12:45.249: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:12:45.529: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:12:45.529: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:12:45.576: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 10567 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1340":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:12:45.576: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:12:45.665: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:12:45.731: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:12:45.731: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:45.774: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 10219 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:12:45.774: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:45.819: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:12:45.886: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:12:45.886: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:45.929: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 10547 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f","csi-mock-csi-mock-volumes-5186":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:12:45.929: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:45.985: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 
10:12:46.031: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-5007" for this suite. 11/25/22 10:12:46.031
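The repeated "Unable to retrieve kubelet pods for node ...: error trying to reach service: No agent available" lines come from the API server's node proxy: the namespace dump fetches each kubelet's pod list through the API server, which in this cluster appears to tunnel the request over the kas-network-proxy (Konnectivity) agent visible in the image lists above. With the control plane refusing connections, no agent tunnel is registered and the proxy answers "No agent available". A minimal client-go sketch of that request path, for illustration only (the node name and kubelet port are taken as assumptions, not from the framework's code):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/<node>:10250/proxy/pods — the API server forwards this
	// to the kubelet, here via the Konnectivity agent tunnel. "No agent
	// available" is the proxy's answer when that tunnel is not established.
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("bootstrap-e2e-minion-group-td9f:10250"). // node name and port are illustrative
		SubResource("proxy").
		Suffix("pods").
		DoRaw(context.TODO())
	if err != nil {
		fmt.Println("proxy request failed:", err)
		return
	}
	fmt.Printf("kubelet returned %d bytes of pod data\n", len(raw))
}
```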
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d42000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113from junit_01.xml
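The [PANICKED] entry is a secondary failure: BeforeEach never got past setting up the test namespace (the log below shows every attempt refused), so the state that the ESIPP AfterEach at loadbalancer.go:1262 dereferences was never initialized and the cleanup hits a nil pointer. A minimal sketch of the defensive pattern, with hypothetical names (this is not the test's actual code):

```go
package main

import "fmt"

// Hypothetical stand-in for the per-test state the real AfterEach tears down.
type serviceJig struct{ name string }

func (j *serviceJig) teardown() { fmt.Println("deleting load balancer for", j.name) }

// cleanup sketches the nil guard: when BeforeEach fails, jig is never
// assigned, and dereferencing it is what produces
// "invalid memory address or nil pointer dereference".
func cleanup(jig *serviceJig) {
	if jig == nil {
		fmt.Println("setup never completed; skipping teardown")
		return
	}
	jig.teardown()
}

func main() {
	var jig *serviceJig // BeforeEach failed, so this was never assigned
	cleanup(jig)        // safe: prints the skip message instead of panicking
}
```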
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:10:13.82 Nov 25 10:10:13.820: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 10:10:13.821 Nov 25 10:10:13.861: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:15.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:17.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:19.900: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:21.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:23.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:25.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:27.900: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:29.900: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:31.900: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:33.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:35.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:37.901: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:12:43.958: INFO: Unexpected error: <*fmt.wrapError | 0xc00039a1e0>: { msg: "wait for service account \"default\" in namespace \"esipp-4590\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001fda50>{ s: "timed out waiting for the condition", }, } Nov 25 10:12:43.958: FAIL: wait for service account "default" in namespace "esipp-4590": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d42000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 10:12:43.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:12:44.054 STEP: Collecting events from 
namespace "esipp-4590". 11/25/22 10:12:44.055 STEP: Found 0 events. 11/25/22 10:12:44.097 Nov 25 10:12:44.137: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:12:44.137: INFO: Nov 25 10:12:44.187: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:12:44.230: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 10287 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:10:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:12:44.230: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:12:44.277: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:12:44.319: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:12:44.319: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.361: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 10552 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1340":"bootstrap-e2e-minion-group-428h","csi-hostpath-provisioning-9276":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.362: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.433: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.481: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:12:44.481: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.526: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 10219 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.526: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.574: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.640: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:12:44.640: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:44.682: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 10547 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f","csi-mock-csi-mock-volumes-5186":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.682: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:44.732: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 
10:12:44.778: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-4590" for this suite. 11/25/22 10:12:44.779
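Each node dump above ends with "Unable to retrieve kubelet pods ... error trying to reach service: No agent available", which is the API server's node-proxy path failing, most likely because no network-proxy (konnectivity) agent was reachable during the same control-plane outage that caused the primary failure. Below is a minimal client-go sketch of that proxy request, using the kubeconfig path and node name from this run for illustration; it is a sketch, not the e2e framework's helper.

// Minimal sketch: fetch the pod list the kubelet reports for a node via the
// API server's node proxy subresource. Assumes a reachable API server and a
// kubeconfig at the given path; this request is what fails with
// "No agent available" when the API server cannot tunnel to the node.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func kubeletPodsViaProxy(ctx context.Context, kubeconfig, node string) ([]byte, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// GET /api/v1/nodes/<node>/proxy/pods
	return cs.CoreV1().RESTClient().Get().
		Resource("nodes").Name(node).
		SubResource("proxy").Suffix("pods").
		DoRaw(ctx)
}

func main() {
	raw, err := kubeletPodsViaProxy(context.Background(), "/workspace/.kube/config", "bootstrap-e2e-minion-group-td9f")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(string(raw))
}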
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010e84b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113 (from junit_01.xml)
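The secondary [PANICKED] failure above is a nil-pointer dereference at loadbalancer.go:73 in the suite's AfterEach: because BeforeEach never completed (see the namespace-creation retries in the log below), the state the cleanup dereferences was never initialized. The following is a hypothetical ginkgo sketch of guarding cleanup against a failed setup; the namespace, service name, and kubeconfig path are taken from this run for illustration and this is not the actual test code.

// Hypothetical sketch, not test/e2e/network/loadbalancer.go: guard suite
// cleanup so a failed BeforeEach (client never built) cannot cascade into a
// nil-pointer panic inside AfterEach.
package network_test

import (
	"context"
	"testing"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func TestCleanupGuardSketch(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "loadbalancer cleanup guard sketch")
}

var _ = ginkgo.Describe("cleanup guard (sketch)", func() {
	var cs kubernetes.Interface // stays nil if BeforeEach fails

	ginkgo.BeforeEach(func() {
		// In the real suite this builds the framework: create a namespace and
		// wait for its "default" service account. When the API server is
		// unreachable that setup fails and cs is never assigned.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
		gomega.Expect(err).NotTo(gomega.HaveOccurred())
		cs = kubernetes.NewForConfigOrDie(cfg)
	})

	ginkgo.AfterEach(func() {
		if cs == nil {
			return // setup never completed; nothing to clean up, and no panic
		}
		_ = cs.CoreV1().Services("loadbalancers-4338").Delete(
			context.Background(), "mutability-test", metav1.DeleteOptions{})
	})

	ginkgo.It("runs only when setup succeeded", func() {})
})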
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:10:14.919 Nov 25 10:10:14.919: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 10:10:14.921 Nov 25 10:10:14.960: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:17.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:19.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:21.001: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:23.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:25.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:27.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:29.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:31.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:33.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:35.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:37.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:10:39.000: INFO: Unexpected error while creating namespace: Post "https://35.230.98.143/api/v1/namespaces": dial tcp 35.230.98.143:443: connect: connection refused Nov 25 10:12:43.957: INFO: Unexpected error: <*fmt.wrapError | 0xc00040a440>: { msg: "wait for service account \"default\" in namespace \"loadbalancers-4338\": timed out waiting for the condition", err: <*errors.errorString | 0xc000295d70>{ s: "timed out waiting for the condition", }, } Nov 25 10:12:43.958: FAIL: wait for service account "default" in namespace "loadbalancers-4338": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010e84b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 10:12:43.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 10:12:44.05 STEP: Collecting events from namespace "loadbalancers-4338". 
11/25/22 10:12:44.05 STEP: Found 0 events. 11/25/22 10:12:44.094 Nov 25 10:12:44.134: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 10:12:44.134: INFO: Nov 25 10:12:44.183: INFO: Logging node info for node bootstrap-e2e-master Nov 25 10:12:44.225: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master dac6ac53-9eda-43da-8ee2-1e385d8ec898 10287 0 2022-11-25 09:53:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 09:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 10:10:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:10:43 +0000 UTC,LastTransitionTime:2022-11-25 09:53:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.98.143,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11ab6410c54044f1117cbba631ecc365,SystemUUID:11ab6410-c540-44f1-117c-bba631ecc365,BootID:82f19f55-2c1d-4c12-9a9e-fdf6887a2587,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:124989753,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 10:12:44.225: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 10:12:44.272: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 10:12:44.315: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 10:12:44.315: INFO: Logging node info for node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.357: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-428h d5c337a2-2f90-47b5-836f-933da2f9ab7a 10552 0 2022-11-25 09:53:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-428h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-428h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1340":"bootstrap-e2e-minion-group-428h","csi-hostpath-provisioning-9276":"bootstrap-e2e-minion-group-428h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-428h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:28 +0000 UTC,LastTransitionTime:2022-11-25 09:53:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:27 +0000 UTC,LastTransitionTime:2022-11-25 09:53:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:05 +0000 UTC,LastTransitionTime:2022-11-25 09:53:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.189.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-428h.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a57494a55998ff6d63e085d45cef3a9,SystemUUID:8a57494a-5599-8ff6-d63e-085d45cef3a9,BootID:558e03ee-b7f3-4498-b6b7-84a1fb7e3be2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1999^33c9f993-6ca7-11ed-8135-664c9a994ae1,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.358: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.433: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-428h Nov 25 10:12:44.481: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-428h: error trying to reach service: No agent available Nov 25 10:12:44.481: INFO: Logging node info for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.526: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n625 85bcae04-a2ad-40a3-baef-1b591ac3c738 10219 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n625 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n625 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7599":"bootstrap-e2e-minion-group-n625","csi-mock-csi-mock-volumes-7357":"bootstrap-e2e-minion-group-n625"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-n625,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:08:01 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.164.225,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n625.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a6751b5059a4fe1e837e7af02658d36,SystemUUID:5a6751b5-059a-4fe1-e837-e7af02658d36,BootID:a086eb9d-d388-4c24-8f02-05f416ec4095,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3616^594c9a09-6ca7-11ed-abdc-a6dd5a434935,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.526: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.574: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n625 Nov 25 10:12:44.639: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n625: error trying to reach service: No agent available Nov 25 10:12:44.639: INFO: Logging node info for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:44.682: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-td9f 4cd357b6-cf3c-48d5-8c26-a8f1a5cdc500 10547 0 2022-11-25 09:53:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-td9f kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-td9f topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2475":"bootstrap-e2e-minion-group-td9f","csi-hostpath-multivolume-9001":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-3992":"bootstrap-e2e-minion-group-td9f","csi-hostpath-provisioning-9429":"bootstrap-e2e-minion-group-td9f","csi-mock-csi-mock-volumes-5186":"bootstrap-e2e-minion-group-td9f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 09:53:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 09:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 10:07:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 10:08:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 10:12:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-cvm-upg-1-3-1-4-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-td9f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 10:08:34 +0000 UTC,LastTransitionTime:2022-11-25 09:53:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 09:53:41 +0000 UTC,LastTransitionTime:2022-11-25 09:53:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 10:12:35 +0000 UTC,LastTransitionTime:2022-11-25 09:53:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.116.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-td9f.c.gce-cvm-upg-1-3-1-4-ctl-skew.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5aab76a7447d43699b2ac1ec202d798a,SystemUUID:5aab76a7-447d-4369-9b2a-c1ec202d798a,BootID:e5508047-abaf-4470-affa-3bd94a2a0e77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.46+8f2371bcceff79,KubeProxyVersion:v1.27.0-alpha.0.46+8f2371bcceff79,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.46_8f2371bcceff79],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24 kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^0a52a028-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5186^03952048-6ca8-11ed-bede-ca69049a1c24,DevicePath:,},},Config:nil,},} Nov 25 10:12:44.682: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-td9f Nov 25 10:12:44.732: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-td9f Nov 25 
10:12:44.777: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-td9f: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-4338" for this suite. 11/25/22 10:12:44.777
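The BeforeEach log above shows the framework retrying namespace creation against the unreachable API server and then giving up with "wait for service account \"default\" ... timed out waiting for the condition". A minimal client-go sketch of that polling pattern follows; the 2s interval and 2m timeout are assumptions rather than the framework's exact values, and the namespace and kubeconfig path are taken from this run for illustration.

// Minimal sketch of waiting for the "default" ServiceAccount to appear in a
// namespace, similar in spirit to the e2e framework's BeforeEach wait.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDefaultServiceAccount(cs kubernetes.Interface, ns string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return true, nil
		}
		if apierrors.IsNotFound(err) {
			return false, nil // token controller has not created it yet; keep polling
		}
		// Treat transient API errors (e.g. "connection refused") as retryable too.
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// On expiry wait.PollImmediate returns "timed out waiting for the
	// condition", matching the message in the log above.
	if err := waitForDefaultServiceAccount(cs, "loadbalancers-4338", 2*time.Minute); err != nil {
		fmt.Println("wait for service account \"default\":", err)
	}
}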
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/loadbalancer.go:458 k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:458 +0x12f2
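This is the UDP variant of the LoadBalancer mutability test; the log that follows repeatedly "pokes" the service's NodePort and LoadBalancer address with a single datagram until something answers, and the failure at loadbalancer.go:458 is that loop timing out. A minimal sketch of such a UDP probe is below, using the NodePort address from this run and an illustrative payload and timeout; it is not the framework's poke helper.

// Minimal sketch of a UDP "poke" like the probes in the log below: send one
// datagram and wait briefly for any reply.
package main

import (
	"fmt"
	"net"
	"time"
)

func pokeUDP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("udp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial: %w", err)
	}
	defer conn.Close()

	if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	// Payload is illustrative. "read: connection refused" in the log means an
	// ICMP port-unreachable came back on the connected UDP socket, i.e.
	// nothing was listening on that port yet.
	if _, err := conn.Write([]byte("hostname")); err != nil {
		return fmt.Errorf("write: %w", err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return fmt.Errorf("read: %w", err)
	}
	fmt.Printf("reply (%d bytes): %q\n", n, buf[:n])
	return nil
}

func main() {
	// Example: the NodePort probed in the log below.
	if err := pokeUDP("34.168.189.189:31768", 3*time.Second); err != nil {
		fmt.Println("poke failed:", err)
	}
}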
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 10:06:32.166 Nov 25 10:06:32.166: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 10:06:32.168 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 10:06:32.41 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 10:06:32.504 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287 Nov 25 10:06:32.778: INFO: namespace for TCP test: loadbalancers-1089 STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-1089 11/25/22 10:06:32.849 Nov 25 10:06:32.947: INFO: service port UDP: 80 STEP: creating a pod to be part of the UDP service mutability-test 11/25/22 10:06:32.947 Nov 25 10:06:33.064: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 10:06:33.138: INFO: Found all 1 pods Nov 25 10:06:33.138: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-4jbc4] Nov 25 10:06:33.138: INFO: Waiting up to 2m0s for pod "mutability-test-4jbc4" in namespace "loadbalancers-1089" to be "running and ready" Nov 25 10:06:33.220: INFO: Pod "mutability-test-4jbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 82.133532ms Nov 25 10:06:33.220: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-4jbc4' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:06:35.392: INFO: Pod "mutability-test-4jbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254297881s Nov 25 10:06:35.392: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-4jbc4' on 'bootstrap-e2e-minion-group-td9f' to be 'Running' but was 'Pending' Nov 25 10:06:37.269: INFO: Pod "mutability-test-4jbc4": Phase="Running", Reason="", readiness=true. Elapsed: 4.131669055s Nov 25 10:06:37.269: INFO: Pod "mutability-test-4jbc4" satisfied condition "running and ready" Nov 25 10:06:37.269: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [mutability-test-4jbc4] STEP: changing the UDP service to type=NodePort 11/25/22 10:06:37.269 Nov 25 10:06:37.453: INFO: UDP node port: 31768 STEP: hitting the UDP service's NodePort 11/25/22 10:06:37.453 Nov 25 10:06:37.453: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:37.493: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:54772->34.168.189.189:31768: read: connection refused Nov 25 10:06:39.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:39.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:50664->34.168.189.189:31768: read: connection refused Nov 25 10:06:41.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:41.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:33903->34.168.189.189:31768: read: connection refused Nov 25 10:06:43.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:43.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:45205->34.168.189.189:31768: read: connection refused Nov 25 10:06:45.493: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:45.532: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:47408->34.168.189.189:31768: read: connection refused Nov 25 10:06:47.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:47.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:46640->34.168.189.189:31768: read: connection refused Nov 25 10:06:49.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:49.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:46482->34.168.189.189:31768: read: connection refused Nov 25 10:06:51.493: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:51.532: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:41701->34.168.189.189:31768: read: connection refused Nov 25 10:06:53.494: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:53.533: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:56558->34.168.189.189:31768: read: connection refused Nov 25 10:06:55.493: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:55.532: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:40733->34.168.189.189:31768: read: connection refused Nov 25 10:06:57.493: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:57.532: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:59742->34.168.189.189:31768: read: connection refused Nov 25 10:06:59.493: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:06:59.537: INFO: Poke("udp://34.168.189.189:31768"): success STEP: creating a static load balancer IP 11/25/22 10:06:59.537 Nov 25 10:07:01.590: INFO: Allocated static load balancer IP: 34.168.152.114 STEP: changing the UDP service to type=LoadBalancer 11/25/22 10:07:01.59 STEP: demoting the static IP to ephemeral 11/25/22 10:07:01.891 STEP: waiting for the UDP service to have a load balancer 11/25/22 10:07:03.456 Nov 25 10:07:03.456: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 25 10:07:43.622: INFO: UDP load balancer: 34.82.51.223 STEP: hitting the UDP service's NodePort 11/25/22 10:07:43.622 Nov 25 10:07:43.622: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:07:43.663: INFO: Poke("udp://34.168.189.189:31768"): success STEP: hitting the UDP service's LoadBalancer 11/25/22 10:07:43.663 Nov 25 10:07:43.664: INFO: Poking udp://34.82.51.223:80 Nov 25 10:07:46.664: INFO: Poke("udp://34.82.51.223:80"): read udp 10.60.203.132:34537->34.82.51.223:80: i/o timeout Nov 25 10:07:48.665: INFO: Poking 
udp://34.82.51.223:80 Nov 25 10:07:48.705: INFO: Poke("udp://34.82.51.223:80"): success STEP: changing the UDP service's NodePort 11/25/22 10:07:48.706 Nov 25 10:07:48.950: INFO: UDP node port: 31769 STEP: hitting the UDP service's new NodePort 11/25/22 10:07:48.95 Nov 25 10:07:48.950: INFO: Poking udp://34.168.189.189:31769 Nov 25 10:07:48.990: INFO: Poke("udp://34.168.189.189:31769"): read udp 10.60.203.132:58283->34.168.189.189:31769: read: connection refused Nov 25 10:07:50.990: INFO: Poking udp://34.168.189.189:31769 Nov 25 10:07:51.030: INFO: Poke("udp://34.168.189.189:31769"): success STEP: checking the old UDP NodePort is closed 11/25/22 10:07:51.03 Nov 25 10:07:51.030: INFO: Poking udp://34.168.189.189:31768 Nov 25 10:07:51.069: INFO: Poke("udp://34.168.189.189:31768"): read udp 10.60.203.132:52579->34.168.189.189:31768: read: connection refused STEP: hitting the UDP service's LoadBalancer 11/25/22 10:07:51.069 Nov 25 10:07:51.069: INFO: Poking udp://34.82.51.223:80 Nov 25 10:07:51.109: INFO: Poke("udp://34.82.51.223:80"): success STEP: changing the UDP service's port 11/25/22 10:07:51.109 Nov 25 10:07:51.200: INFO: service port UDP: 81 STEP: hitting the UDP service's NodePort 11/25/22 10:07:51.2 Nov 25 10:07:51.200: INFO: Poking udp://34.168.189.189:31769 Nov 25 10:07:51.240: INFO: Poke("udp://34.168.189.189:31769"): success STEP: hitting the UDP service's LoadBalancer 11/25/22 10:07:51.24 Nov 25 10:07:51.240: INFO: Poking udp://34.82.51.223:81 Nov 25 10:07:51.280: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59019->34.82.51.223:81: read: connection refused Nov 25 10:07:53.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:07:53.322: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42382->34.82.51.223:81: read: connection refused Nov 25 10:07:55.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:07:55.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49436->34.82.51.223:81: read: connection refused Nov 25 10:07:57.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:07:57.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52831->34.82.51.223:81: read: connection refused Nov 25 10:07:59.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:07:59.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36652->34.82.51.223:81: read: connection refused Nov 25 10:08:01.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:01.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39903->34.82.51.223:81: read: connection refused Nov 25 10:08:03.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:03.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55935->34.82.51.223:81: read: connection refused Nov 25 10:08:05.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:05.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40033->34.82.51.223:81: read: connection refused Nov 25 10:08:07.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:07.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56534->34.82.51.223:81: read: connection refused Nov 25 10:08:09.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:09.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39169->34.82.51.223:81: read: connection refused Nov 25 10:08:11.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:11.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40123->34.82.51.223:81: read: connection refused Nov 25 10:08:13.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:13.322: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35554->34.82.51.223:81: read: connection refused Nov 25 10:08:15.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:15.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39349->34.82.51.223:81: read: connection refused Nov 25 10:08:17.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:17.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49086->34.82.51.223:81: read: connection refused Nov 25 10:08:19.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:19.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46603->34.82.51.223:81: read: connection refused Nov 25 10:08:21.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42018->34.82.51.223:81: read: connection refused Nov 25 10:08:23.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:23.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57226->34.82.51.223:81: read: connection refused Nov 25 10:08:25.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:25.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:41882->34.82.51.223:81: read: connection refused Nov 25 10:08:27.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34262->34.82.51.223:81: read: connection refused Nov 25 10:08:29.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:29.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:37292->34.82.51.223:81: read: connection refused Nov 25 10:08:31.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:31.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53413->34.82.51.223:81: read: connection refused Nov 25 10:08:33.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:33.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:58214->34.82.51.223:81: read: connection refused Nov 25 10:08:35.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:35.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57991->34.82.51.223:81: read: connection refused Nov 25 10:08:37.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:37.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48076->34.82.51.223:81: read: connection refused Nov 25 10:08:39.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:39.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55058->34.82.51.223:81: read: connection refused Nov 25 10:08:41.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:41.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44546->34.82.51.223:81: read: connection refused Nov 25 10:08:43.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:43.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44106->34.82.51.223:81: read: connection refused Nov 25 10:08:45.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:45.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:38744->34.82.51.223:81: read: connection refused Nov 25 10:08:47.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:47.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51618->34.82.51.223:81: read: connection refused Nov 25 10:08:49.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:49.322: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35031->34.82.51.223:81: read: connection refused Nov 25 10:08:51.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:51.319: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49945->34.82.51.223:81: read: connection refused Nov 25 10:08:53.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:53.321: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:54984->34.82.51.223:81: read: connection refused Nov 25 10:08:55.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:55.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35337->34.82.51.223:81: read: connection refused Nov 25 10:08:57.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:57.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39162->34.82.51.223:81: read: connection refused Nov 25 10:08:59.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:08:59.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45092->34.82.51.223:81: read: connection refused Nov 25 10:09:01.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:01.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57289->34.82.51.223:81: read: connection refused Nov 25 10:09:03.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:03.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51363->34.82.51.223:81: read: connection refused Nov 25 10:09:05.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:05.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59266->34.82.51.223:81: read: connection refused Nov 25 10:09:07.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:07.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36337->34.82.51.223:81: read: connection refused Nov 25 10:09:09.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:09.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:38243->34.82.51.223:81: read: connection refused Nov 25 10:09:11.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:11.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42373->34.82.51.223:81: read: connection refused Nov 25 10:09:13.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:13.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52785->34.82.51.223:81: read: connection refused Nov 25 10:09:15.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:15.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33411->34.82.51.223:81: read: connection refused Nov 25 10:09:17.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:17.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57621->34.82.51.223:81: read: connection refused Nov 25 10:09:19.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:19.322: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45430->34.82.51.223:81: read: connection refused Nov 25 10:09:21.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:21.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:41852->34.82.51.223:81: read: connection refused Nov 25 10:09:23.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:23.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40198->34.82.51.223:81: read: connection refused Nov 25 10:09:25.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:25.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49081->34.82.51.223:81: read: connection refused Nov 25 10:09:27.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:41255->34.82.51.223:81: read: connection refused Nov 25 10:09:29.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:29.320: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59041->34.82.51.223:81: read: connection refused Nov 25 10:09:31.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:31.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36586->34.82.51.223:81: read: connection refused Nov 25 10:09:33.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:33.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42016->34.82.51.223:81: read: connection refused Nov 25 10:09:35.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:35.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55004->34.82.51.223:81: read: connection refused Nov 25 10:09:37.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:37.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:43489->34.82.51.223:81: read: connection refused Nov 25 10:09:39.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:39.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59914->34.82.51.223:81: read: connection refused Nov 25 10:09:41.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:41.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56988->34.82.51.223:81: read: connection refused Nov 25 10:09:43.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:43.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45051->34.82.51.223:81: read: connection refused Nov 25 10:09:45.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:45.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57505->34.82.51.223:81: read: connection refused Nov 25 10:09:47.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:47.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:38109->34.82.51.223:81: read: connection refused Nov 25 10:09:49.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:49.321: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:60294->34.82.51.223:81: read: connection refused Nov 25 10:09:51.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:51.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52794->34.82.51.223:81: read: connection refused Nov 25 10:09:53.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:53.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34403->34.82.51.223:81: read: connection refused Nov 25 10:09:55.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:55.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50470->34.82.51.223:81: read: connection refused Nov 25 10:09:57.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:57.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52722->34.82.51.223:81: read: connection refused Nov 25 10:09:59.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:09:59.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40360->34.82.51.223:81: read: connection refused Nov 25 10:10:01.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:01.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47485->34.82.51.223:81: read: connection refused Nov 25 10:10:03.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:03.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59340->34.82.51.223:81: read: connection refused Nov 25 10:10:05.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:05.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56755->34.82.51.223:81: read: connection refused Nov 25 10:10:07.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:07.320: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48594->34.82.51.223:81: read: connection refused Nov 25 10:10:09.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:09.337: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:54800->34.82.51.223:81: read: connection refused Nov 25 10:10:11.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:11.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49722->34.82.51.223:81: read: connection refused Nov 25 10:10:13.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:13.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46040->34.82.51.223:81: read: connection refused Nov 25 10:10:15.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:15.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59712->34.82.51.223:81: read: connection refused Nov 25 10:10:17.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:17.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:37274->34.82.51.223:81: read: connection refused Nov 25 10:10:19.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:19.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39844->34.82.51.223:81: read: connection refused Nov 25 10:10:21.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36065->34.82.51.223:81: read: connection refused Nov 25 10:10:23.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:23.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:60848->34.82.51.223:81: read: connection refused Nov 25 10:10:25.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:25.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44365->34.82.51.223:81: read: connection refused Nov 25 10:10:27.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53565->34.82.51.223:81: read: connection refused Nov 25 10:10:29.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:29.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59527->34.82.51.223:81: read: connection refused Nov 25 10:10:31.283: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:31.322: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55588->34.82.51.223:81: read: connection refused Nov 25 10:10:33.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:33.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:43177->34.82.51.223:81: read: connection refused Nov 25 10:10:35.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:35.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34643->34.82.51.223:81: read: connection refused Nov 25 10:10:37.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:37.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35416->34.82.51.223:81: read: connection refused Nov 25 10:10:39.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:39.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59388->34.82.51.223:81: read: connection refused Nov 25 10:10:41.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:41.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44618->34.82.51.223:81: read: connection refused Nov 25 10:10:43.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:43.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:43789->34.82.51.223:81: read: connection refused Nov 25 10:10:45.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:45.319: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:60579->34.82.51.223:81: read: connection refused Nov 25 10:10:47.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:47.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35805->34.82.51.223:81: read: connection refused Nov 25 10:10:49.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:49.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:58813->34.82.51.223:81: read: connection refused Nov 25 10:10:51.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:51.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33217->34.82.51.223:81: read: connection refused Nov 25 10:10:53.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:53.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45388->34.82.51.223:81: read: connection refused Nov 25 10:10:55.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:55.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:38354->34.82.51.223:81: read: connection refused Nov 25 10:10:57.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:57.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59970->34.82.51.223:81: read: connection refused Nov 25 10:10:59.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:10:59.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57756->34.82.51.223:81: read: connection refused Nov 25 10:11:01.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:01.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36601->34.82.51.223:81: read: connection refused Nov 25 10:11:03.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:03.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56949->34.82.51.223:81: read: connection refused Nov 25 10:11:05.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:05.321: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46115->34.82.51.223:81: read: connection refused Nov 25 10:11:07.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:07.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33109->34.82.51.223:81: read: connection refused Nov 25 10:11:09.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:09.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47716->34.82.51.223:81: read: connection refused Nov 25 10:11:11.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:11.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33959->34.82.51.223:81: read: connection refused Nov 25 10:11:13.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:13.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53998->34.82.51.223:81: read: connection refused Nov 25 10:11:15.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:15.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:54836->34.82.51.223:81: read: connection refused Nov 25 10:11:17.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:17.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47726->34.82.51.223:81: read: connection refused Nov 25 10:11:19.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:19.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52092->34.82.51.223:81: read: connection refused Nov 25 10:11:21.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47303->34.82.51.223:81: read: connection refused Nov 25 10:11:23.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:23.320: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:57699->34.82.51.223:81: read: connection refused Nov 25 10:11:25.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:25.321: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33334->34.82.51.223:81: read: connection refused Nov 25 10:11:27.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:60317->34.82.51.223:81: read: connection refused Nov 25 10:11:29.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:29.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48193->34.82.51.223:81: read: connection refused Nov 25 10:11:31.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:31.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:58407->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m0.547s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 3m41.472s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:11:33.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:33.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:37943->34.82.51.223:81: read: connection refused Nov 25 10:11:35.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:35.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49565->34.82.51.223:81: read: connection refused Nov 25 10:11:37.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:37.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53225->34.82.51.223:81: read: connection refused Nov 25 10:11:39.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:39.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52689->34.82.51.223:81: read: connection refused Nov 25 10:11:41.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:41.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52114->34.82.51.223:81: read: connection refused Nov 25 10:11:43.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:43.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52975->34.82.51.223:81: read: connection refused Nov 25 10:11:45.282: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:45.321: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45523->34.82.51.223:81: read: connection refused Nov 25 10:11:47.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:47.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52119->34.82.51.223:81: read: connection refused Nov 25 10:11:49.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:49.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34827->34.82.51.223:81: read: connection refused Nov 25 10:11:51.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:51.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55316->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m20.549s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 4m1.475s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:11:53.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:53.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36671->34.82.51.223:81: read: connection refused Nov 25 10:11:55.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:55.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35870->34.82.51.223:81: read: connection refused Nov 25 10:11:57.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:57.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49586->34.82.51.223:81: read: connection refused Nov 25 10:11:59.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:11:59.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42527->34.82.51.223:81: read: connection refused Nov 25 10:12:01.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:01.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56788->34.82.51.223:81: read: connection refused Nov 25 10:12:03.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:03.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:58135->34.82.51.223:81: read: connection refused Nov 25 10:12:05.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:05.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36213->34.82.51.223:81: read: connection refused Nov 25 10:12:07.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:07.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49404->34.82.51.223:81: read: connection refused Nov 25 10:12:09.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:09.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52826->34.82.51.223:81: read: connection refused Nov 25 10:12:11.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:11.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:45575->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m40.552s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 4m21.478s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:12:13.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:13.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39356->34.82.51.223:81: read: connection refused Nov 25 10:12:15.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:15.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39785->34.82.51.223:81: read: connection refused Nov 25 10:12:17.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:17.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44013->34.82.51.223:81: read: connection refused Nov 25 10:12:19.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:19.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50035->34.82.51.223:81: read: connection refused Nov 25 10:12:21.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55933->34.82.51.223:81: read: connection refused Nov 25 10:12:23.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:23.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47796->34.82.51.223:81: read: connection refused Nov 25 10:12:25.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:25.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34392->34.82.51.223:81: read: connection refused Nov 25 10:12:27.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:27.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50458->34.82.51.223:81: read: connection refused Nov 25 10:12:29.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:29.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50680->34.82.51.223:81: read: connection refused Nov 25 10:12:31.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:31.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53600->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m0.555s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m0.009s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 4m41.481s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 
0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:12:33.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:33.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52294->34.82.51.223:81: read: connection refused Nov 25 10:12:35.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:35.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36024->34.82.51.223:81: read: connection refused Nov 25 10:12:37.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:37.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35628->34.82.51.223:81: read: connection refused Nov 25 10:12:39.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:39.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55644->34.82.51.223:81: read: connection refused Nov 25 10:12:41.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:41.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50420->34.82.51.223:81: read: connection refused Nov 25 10:12:43.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:43.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48713->34.82.51.223:81: read: connection refused Nov 25 10:12:45.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:45.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36905->34.82.51.223:81: read: connection refused Nov 25 10:12:47.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:47.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52252->34.82.51.223:81: read: connection refused Nov 25 10:12:49.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:49.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46369->34.82.51.223:81: read: connection refused Nov 25 10:12:51.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:51.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:32838->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m20.558s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m20.012s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's 
LoadBalancer (Step Runtime: 5m1.483s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:12:53.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:53.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39891->34.82.51.223:81: read: connection refused Nov 25 10:12:55.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:55.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40555->34.82.51.223:81: read: connection refused Nov 25 10:12:57.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:57.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44460->34.82.51.223:81: read: connection refused Nov 25 10:12:59.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:12:59.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59070->34.82.51.223:81: read: connection refused Nov 25 10:13:01.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:01.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39179->34.82.51.223:81: read: connection refused Nov 25 10:13:03.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:03.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51350->34.82.51.223:81: read: connection refused Nov 25 10:13:05.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:05.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34957->34.82.51.223:81: read: connection refused Nov 25 10:13:07.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:07.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51985->34.82.51.223:81: read: connection refused Nov 25 10:13:09.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:09.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49439->34.82.51.223:81: read: connection refused Nov 25 10:13:11.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:11.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:40705->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of 
a UDP service [Slow] (Spec Runtime: 6m40.56s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m40.014s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 5m21.485s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:13:13.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:13.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55090->34.82.51.223:81: read: connection refused Nov 25 10:13:15.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:15.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39954->34.82.51.223:81: read: connection refused Nov 25 10:13:17.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:17.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56708->34.82.51.223:81: read: connection refused Nov 25 10:13:19.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:19.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42793->34.82.51.223:81: read: connection refused Nov 25 10:13:21.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34058->34.82.51.223:81: read: connection refused Nov 25 10:13:23.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:23.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:47695->34.82.51.223:81: read: connection refused Nov 25 10:13:25.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:25.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59460->34.82.51.223:81: read: connection refused Nov 25 10:13:27.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:52849->34.82.51.223:81: read: connection refused Nov 25 10:13:29.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:29.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42645->34.82.51.223:81: read: connection refused Nov 25 10:13:31.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:31.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:43190->34.82.51.223:81: read: 
connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m0.562s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m0.016s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 5m41.487s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:13:33.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:33.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34149->34.82.51.223:81: read: connection refused Nov 25 10:13:35.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:35.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53065->34.82.51.223:81: read: connection refused Nov 25 10:13:37.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:37.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:50154->34.82.51.223:81: read: connection refused Nov 25 10:13:39.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:39.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:43499->34.82.51.223:81: read: connection refused Nov 25 10:13:41.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:41.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46696->34.82.51.223:81: read: connection refused Nov 25 10:13:43.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:43.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42291->34.82.51.223:81: read: connection refused Nov 25 10:13:45.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:45.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:60208->34.82.51.223:81: read: connection refused Nov 25 10:13:47.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:47.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:46564->34.82.51.223:81: read: connection refused Nov 25 10:13:49.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:49.319: INFO: Poke("udp://34.82.51.223:81"): read udp 
10.60.203.132:45704->34.82.51.223:81: read: connection refused Nov 25 10:13:51.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:51.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48692->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m20.563s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m20.017s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 6m1.489s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:13:53.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:53.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:39669->34.82.51.223:81: read: connection refused Nov 25 10:13:55.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:55.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56245->34.82.51.223:81: read: connection refused Nov 25 10:13:57.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:57.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55525->34.82.51.223:81: read: connection refused Nov 25 10:13:59.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:13:59.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51166->34.82.51.223:81: read: connection refused Nov 25 10:14:01.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:01.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:41872->34.82.51.223:81: read: connection refused Nov 25 10:14:03.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:03.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56015->34.82.51.223:81: read: connection refused Nov 25 10:14:05.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:05.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:44985->34.82.51.223:81: read: connection refused Nov 25 10:14:07.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:07.320: INFO: Poke("udp://34.82.51.223:81"): 
read udp 10.60.203.132:49887->34.82.51.223:81: read: connection refused Nov 25 10:14:09.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:09.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34921->34.82.51.223:81: read: connection refused Nov 25 10:14:11.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:11.322: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35192->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m40.565s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m40.019s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 6m21.491s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:444 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 10:14:13.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:13.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59128->34.82.51.223:81: read: connection refused Nov 25 10:14:15.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:15.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51482->34.82.51.223:81: read: connection refused Nov 25 10:14:17.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:17.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34546->34.82.51.223:81: read: connection refused Nov 25 10:14:19.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:19.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33300->34.82.51.223:81: read: connection refused Nov 25 10:14:21.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:21.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34739->34.82.51.223:81: read: connection refused Nov 25 10:14:23.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:23.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33378->34.82.51.223:81: read: connection refused Nov 25 10:14:25.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:25.320: INFO: 
Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:35077->34.82.51.223:81: read: connection refused Nov 25 10:14:27.280: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:27.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:48907->34.82.51.223:81: read: connection refused Nov 25 10:14:29.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:29.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:51145->34.82.51.223:81: read: connection refused Nov 25 10:14:31.281: INFO: Poking udp://34.82.51.223:81 Nov 25 10:14:31.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56764->34.82.51.223:81: read: connection refused ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m0.569s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 8m0.023s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 6m41.495s) test/e2e/network/loadbalancer.go:443 Spec Goroutine goroutine 1971 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?) 
------------------------------
Progress Report for Ginkgo Process #6
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m0.569s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 8m0.023s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 6m41.495s)
        test/e2e/network/loadbalancer.go:443

Spec Goroutine
goroutine 1971 [select, 2 minutes]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?)
    test/e2e/network/service.go:603
> k8s.io/kubernetes/test/e2e/network.glob..func19.4()
    test/e2e/network/loadbalancer.go:444
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 10:14:33.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:33.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:34392->34.82.51.223:81: read: connection refused
Nov 25 10:14:35.281: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:35.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:36128->34.82.51.223:81: read: connection refused
Nov 25 10:14:37.281: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:37.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:38098->34.82.51.223:81: read: connection refused
Nov 25 10:14:39.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:39.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:49673->34.82.51.223:81: read: connection refused
Nov 25 10:14:41.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:41.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:53479->34.82.51.223:81: read: connection refused
Nov 25 10:14:43.281: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:43.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:42310->34.82.51.223:81: read: connection refused
Nov 25 10:14:45.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:45.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:55140->34.82.51.223:81: read: connection refused
Nov 25 10:14:47.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:47.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:54910->34.82.51.223:81: read: connection refused
Nov 25 10:14:49.281: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:49.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:59769->34.82.51.223:81: read: connection refused
Nov 25 10:14:51.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:51.319: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:32982->34.82.51.223:81: read: connection refused
------------------------------
Progress Report for Ginkgo Process #6
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m20.572s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 8m20.025s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 7m1.497s)
        test/e2e/network/loadbalancer.go:443

Spec Goroutine
goroutine 1971 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc003b124b0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x60?, 0x2fd9d05?, 0x10?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000206c40?, 0xc003505cb0?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0044f3f98?, 0x754e980?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0038f2670, 0xc}, 0x51, 0x0?)
    test/e2e/network/service.go:603
> k8s.io/kubernetes/test/e2e/network.glob..func19.4()
    test/e2e/network/loadbalancer.go:444
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000fb1800})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 10:14:53.281: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:53.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:33855->34.82.51.223:81: read: connection refused
Nov 25 10:14:55.280: INFO: Poking udp://34.82.51.223:81
Nov 25 10:14:55.320: INFO: Poke("udp://34.82.51.223:81"): read udp 10.60.203.132:56793-