go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:04:02.046: failed to list events in namespace "chunking-5595": Get "https://35.233.152.153/api/v1/namespaces/chunking-5595/events": dial tcp 35.233.152.153:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:04:02.086: Couldn't delete ns: "chunking-5595": Delete "https://35.233.152.153/api/v1/namespaces/chunking-5595": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/chunking-5595", Err:(*net.OpError)(0xc0016e02d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:02:43.847 Nov 25 18:02:43.847: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename chunking 11/25/22 18:02:43.849 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:02:44.068 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:02:44.173 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/apimachinery/chunking.go:51 STEP: creating a large number of resources 11/25/22 18:02:44.27 [It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] test/e2e/apimachinery/chunking.go:126 STEP: retrieving the first page 11/25/22 18:03:01.832 Nov 25 18:03:01.885: INFO: Retrieved 40/40 results with rv 4973 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDk3Mywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 STEP: retrieving the second page until the token expires 11/25/22 18:03:01.885 Nov 25 18:03:21.963: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDk3Mywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 25 18:03:41.936: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDk3Mywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet STEP: retrieving the second page again with the token received with the error message 11/25/22 18:04:01.926 Nov 25 18:04:01.966: INFO: Unexpected error: failed to list pod templates in namespace: chunking-5595, given inconsistent continue token and limit: 40: <*url.Error | 0xc003130000>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/chunking-5595/podtemplates?limit=40", Err: <*net.OpError | 0xc0046782d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0044e6690>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006fc120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:04:01.966: FAIL: failed to list pod templates in namespace: chunking-5595, given inconsistent continue token and limit: 40: Get "https://35.233.152.153/api/v1/namespaces/chunking-5595/podtemplates?limit=40": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:177 +0x7fc [AfterEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/node/init/init.go:32 Nov 25 18:04:01.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:04:02.006 STEP: Collecting events from namespace "chunking-5595". 
11/25/22 18:04:02.006 Nov 25 18:04:02.046: INFO: Unexpected error: failed to list events in namespace "chunking-5595": <*url.Error | 0xc0031304b0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/chunking-5595/events", Err: <*net.OpError | 0xc004678500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039c0960>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006fcc20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:04:02.046: FAIL: failed to list events in namespace "chunking-5595": Get "https://35.233.152.153/api/v1/namespaces/chunking-5595/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003d605c0, {0xc002a79e60, 0xd}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002d50340}, {0xc002a79e60, 0xd}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003d60650?, {0xc002a79e60?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00114fd10) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0012a9fd0?, 0xc0000cdfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004270228?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0012a9fd0?, 0x29449fc?}, {0xae73300?, 0xc0000cdf80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking tear down framework | framework.go:193 STEP: Destroying namespace "chunking-5595" for this suite. 11/25/22 18:04:02.046 Nov 25 18:04:02.086: FAIL: Couldn't delete ns: "chunking-5595": Delete "https://35.233.152.153/api/v1/namespaces/chunking-5595": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/chunking-5595", Err:(*net.OpError)(0xc0016e02d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00114fd10) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0012a9f00?, 0xc0022f2fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0012a9f00?, 0x0?}, {0xae73300?, 0x5?, 0xc004342210?}) /usr/local/go/src/reflect/value.go:368 +0xbc
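The chunking test drives the paginated List API: it fetches podtemplates with limit=40, holds the continue token until etcd compaction invalidates its resource version, and then resumes from the last key using the inconsistent continue token the apiserver returns alongside the 410 error. A minimal client-go sketch of that pattern (illustrative helper name, not the e2e test's own code):

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPodTemplatesInChunks pages through podtemplates 40 at a time. If the
// continue token's resource version has been compacted away, the apiserver
// answers 410 Gone and includes a fresh, inconsistent continue token in the
// returned Status, which lets the client resume listing from the last key.
func listPodTemplatesInChunks(ctx context.Context, c kubernetes.Interface, ns string) error {
	opts := metav1.ListOptions{Limit: 40}
	for {
		list, err := c.CoreV1().PodTemplates(ns).List(ctx, opts)
		if err != nil {
			status, ok := err.(apierrors.APIStatus)
			if apierrors.IsResourceExpired(err) && ok && status.Status().ListMeta.Continue != "" {
				// Resume from the last key with the inconsistent token.
				opts.Continue = status.Status().ListMeta.Continue
				continue
			}
			return err
		}
		fmt.Printf("retrieved %d items at rv %s\n", len(list.Items), list.ResourceVersion)
		if list.Continue == "" {
			return nil
		}
		opts.Continue = list.Continue
	}
}

In the run above, the failure is not in that logic at all: the List at chunking.go:177, the later event dump, and the namespace deletion all fail because the apiserver at 35.233.152.153:443 refuses connections.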
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376
There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:04:00.576: failed to list events in namespace "cronjob-9393": Get "https://35.233.152.153/api/v1/namespaces/cronjob-9393/events": dial tcp 35.233.152.153:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:04:00.616: Couldn't delete ns: "cronjob-9393": Delete "https://35.233.152.153/api/v1/namespaces/cronjob-9393": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/cronjob-9393", Err:(*net.OpError)(0xc00504b770)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:59.185 Nov 25 17:57:59.185: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/25/22 17:57:59.187 Nov 25 17:57:59.227: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:01.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:03.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:05.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:07.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:09.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:11.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:13.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:15.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:17.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:19.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:21.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:23.266: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 17:59:05.787 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 17:59:05.876 [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 STEP: Creating a suspended cronjob 11/25/22 17:59:06.35 STEP: Ensuring no jobs are scheduled 11/25/22 17:59:06.416 STEP: Ensuring no job exists by listing jobs explicitly 11/25/22 18:04:00.456 Nov 25 18:04:00.496: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-9393: <*url.Error | 0xc003a32240>: { Op: "Get", URL: "https://35.233.152.153/apis/batch/v1/namespaces/cronjob-9393/jobs", Err: <*net.OpError | 0xc003bf42d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004f494d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 
0xc004e40e20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:04:00.496: FAIL: Failed to list the CronJobs in namespace cronjob-9393: Get "https://35.233.152.153/apis/batch/v1/namespaces/cronjob-9393/jobs": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func2.2() test/e2e/apps/cronjob.go:111 +0x376 [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 25 18:04:00.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:04:00.536 STEP: Collecting events from namespace "cronjob-9393". 11/25/22 18:04:00.536 Nov 25 18:04:00.576: INFO: Unexpected error: failed to list events in namespace "cronjob-9393": <*url.Error | 0xc003a32720>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/cronjob-9393/events", Err: <*net.OpError | 0xc003bf4640>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005026ae0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004e411a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:04:00.576: FAIL: failed to list events in namespace "cronjob-9393": Get "https://35.233.152.153/api/v1/namespaces/cronjob-9393/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0017c25c0, {0xc003902070, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001cf76c0}, {0xc003902070, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0017c2650?, {0xc003902070?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001087860) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc002ad8e70?, 0xc004b84fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004e3e228?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002ad8e70?, 0x29449fc?}, {0xae73300?, 0xc004b84f80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-9393" for this suite. 11/25/22 18:04:00.576 Nov 25 18:04:00.616: FAIL: Couldn't delete ns: "cronjob-9393": Delete "https://35.233.152.153/api/v1/namespaces/cronjob-9393": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/cronjob-9393", Err:(*net.OpError)(0xc00504b770)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001087860) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc002ad8df0?, 0xc00167bfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002ad8df0?, 0x0?}, {0xae73300?, 0x5?, 0xc00517f248?}) /usr/local/go/src/reflect/value.go:368 +0xbc
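cronjob.go:111 corresponds to the "Ensuring no job exists by listing jobs explicitly" step: the test creates a CronJob with spec.suspend=true and, after letting several schedule intervals pass, asserts with a plain Jobs List that the controller never created anything. A rough sketch of that check (hypothetical helper, assuming a prepared CronJob object):

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureSuspendedCronJobSchedulesNothing creates a suspended CronJob and then
// verifies, by listing Jobs in the namespace, that none were created. The
// "connection refused" failure above is this final List call failing.
func ensureSuspendedCronJobSchedulesNothing(ctx context.Context, c kubernetes.Interface, ns string, cj *batchv1.CronJob) error {
	suspend := true
	cj.Spec.Suspend = &suspend
	if _, err := c.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{}); err != nil {
		return err
	}
	// ... wait for a few schedule intervals to elapse ...
	jobs, err := c.BatchV1().Jobs(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("failed to list the Jobs in namespace %s: %w", ns, err)
	}
	if len(jobs.Items) != 0 {
		return fmt.Errorf("expected no Jobs for a suspended CronJob, found %d", len(jobs.Items))
	}
	return nil
}

As with the chunking case, the assertion itself never runs; the List is refused because the apiserver is unreachable, and the cleanup steps fail for the same reason.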
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:152
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:152 +0xa3c
There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:12:59.518: failed to list events in namespace "cronjob-2141": Get "https://35.233.152.153/api/v1/namespaces/cronjob-2141/events": dial tcp 35.233.152.153:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:12:59.559: Couldn't delete ns: "cronjob-2141": Delete "https://35.233.152.153/api/v1/namespaces/cronjob-2141": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/cronjob-2141", Err:(*net.OpError)(0xc004306280)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:07:08.336 Nov 25 18:07:08.336: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/25/22 18:07:08.338 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:07:08.61 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:07:08.725 [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/apps/cronjob.go:124 STEP: Creating a ForbidConcurrent cronjob 11/25/22 18:07:08.821 STEP: Ensuring a job is scheduled 11/25/22 18:07:09.204 STEP: Ensuring exactly one is scheduled 11/25/22 18:08:01.254 STEP: Ensuring exactly one running job exists by listing jobs explicitly 11/25/22 18:08:01.309 STEP: Ensuring no more jobs are scheduled 11/25/22 18:08:01.359 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] (Spec Runtime: 5m0.486s) test/e2e/apps/cronjob.go:124 In [It] (Node Runtime: 5m0s) test/e2e/apps/cronjob.go:124 At [By Step] Ensuring no more jobs are scheduled (Step Runtime: 4m7.463s) test/e2e/apps/cronjob.go:146 Spec Goroutine goroutine 2167 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f48930, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x30?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x766e270?, 0xc000a47d60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0xc0d85f0055695f5d?, 0xa2b71baea2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apps.waitForActiveJobs({0x801de88?, 0xc0009d4b60}, {0xc002350940, 0xc}, {0xc002568f00, 0x6}, 0x2) test/e2e/apps/cronjob.go:593 > k8s.io/kubernetes/test/e2e/apps.glob..func2.3() test/e2e/apps/cronjob.go:147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0010fbb00, 0xc0003a7c20}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] (Spec Runtime: 5m20.487s) test/e2e/apps/cronjob.go:124 In [It] (Node Runtime: 5m20.002s) test/e2e/apps/cronjob.go:124 At [By Step] Ensuring no more jobs are scheduled (Step Runtime: 4m27.465s) test/e2e/apps/cronjob.go:146 Spec Goroutine goroutine 2167 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f48930, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x30?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x766e270?, 0xc000a47d60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0xc0d85f0055695f5d?, 0xa2b71baea2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apps.waitForActiveJobs({0x801de88?, 0xc0009d4b60}, {0xc002350940, 0xc}, {0xc002568f00, 0x6}, 0x2) test/e2e/apps/cronjob.go:593 > k8s.io/kubernetes/test/e2e/apps.glob..func2.3() test/e2e/apps/cronjob.go:147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0010fbb00, 0xc0003a7c20}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] (Spec Runtime: 5m40.49s) test/e2e/apps/cronjob.go:124 In [It] (Node Runtime: 5m40.004s) test/e2e/apps/cronjob.go:124 At [By Step] Ensuring no more jobs are scheduled (Step Runtime: 4m47.467s) test/e2e/apps/cronjob.go:146 Spec Goroutine goroutine 2167 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f48930, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x30?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x766e270?, 0xc000a47d60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0xc0d85f0055695f5d?, 0xa2b71baea2?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apps.waitForActiveJobs({0x801de88?, 0xc0009d4b60}, {0xc002350940, 0xc}, {0xc002568f00, 0x6}, 0x2) test/e2e/apps/cronjob.go:593 > k8s.io/kubernetes/test/e2e/apps.glob..func2.3() test/e2e/apps/cronjob.go:147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0010fbb00, 0xc0003a7c20}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ STEP: Removing cronjob 11/25/22 18:12:59.399 Nov 25 18:12:59.439: INFO: Unexpected error: Failed to delete CronJob forbid in namespace cronjob-2141: <*url.Error | 0xc0050ad7a0>: { Op: "Delete", URL: "https://35.233.152.153/apis/batch/v1/namespaces/cronjob-2141/cronjobs/forbid", Err: <*net.OpError | 0xc004306050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004519e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004338d80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:12:59.439: FAIL: Failed to delete CronJob forbid in namespace cronjob-2141: Delete "https://35.233.152.153/apis/batch/v1/namespaces/cronjob-2141/cronjobs/forbid": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func2.3() test/e2e/apps/cronjob.go:152 +0xa3c [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 25 18:12:59.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:12:59.479 STEP: Collecting events from namespace "cronjob-2141". 
11/25/22 18:12:59.479 Nov 25 18:12:59.518: INFO: Unexpected error: failed to list events in namespace "cronjob-2141": <*url.Error | 0xc004363b00>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/cronjob-2141/events", Err: <*net.OpError | 0xc0032d4c30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0043c84b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000ad14a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:12:59.518: FAIL: failed to list events in namespace "cronjob-2141": Get "https://35.233.152.153/api/v1/namespaces/cronjob-2141/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001f2a5c0, {0xc002350940, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0009d4b60}, {0xc002350940, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001f2a650?, {0xc002350940?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00114f950) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0018e5660?, 0xc0022f3fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00049a088?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0018e5660?, 0x29449fc?}, {0xae73300?, 0xc0022f3f80?, 0x3a212e4?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-2141" for this suite. 11/25/22 18:12:59.519 Nov 25 18:12:59.559: FAIL: Couldn't delete ns: "cronjob-2141": Delete "https://35.233.152.153/api/v1/namespaces/cronjob-2141": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/cronjob-2141", Err:(*net.OpError)(0xc004306280)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00114f950) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0018e55e0?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0018e55e0?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
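The ForbidConcurrent spec spends its time in the waitForActiveJobs frames visible in the goroutine dumps above, polling the CronJob's .status.active to confirm that one Job stays active and no second one is scheduled; it then fails when the "Removing cronjob" Delete hits the refused connection. A hypothetical helper with the same polling shape as the dump (not the test's own source):

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForActiveJobCount polls the CronJob status until at least `want` Jobs
// appear in .status.active, mirroring the wait.PollWithContext loop in the
// goroutine dump above.
func waitForActiveJobCount(ctx context.Context, c kubernetes.Interface, ns, name string, want int) error {
	return wait.PollWithContext(ctx, 5*time.Second, 5*time.Minute, func(ctx context.Context) (bool, error) {
		cj, err := c.BatchV1().CronJobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(cj.Status.Active) >= want, nil
	})
}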
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010c81e0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:13:00.059 Nov 25 18:13:00.059: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 18:13:00.06 Nov 25 18:13:00.099: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:02.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:04.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:06.138: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:08.141: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:10.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:12.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:14.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:16.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:18.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:20.138: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:22.138: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:24.140: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:26.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:28.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:30.139: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:30.178: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:30.178: INFO: Unexpected error: <*errors.errorString | 0xc000287c60>: { s: "timed out waiting for the condition", } Nov 25 18:13:30.178: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010c81e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 18:13:30.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:30.218 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00116a1e0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:13:02.672 Nov 25 18:13:02.672: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 18:13:02.673 Nov 25 18:13:02.713: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:04.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:06.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:08.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:10.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:12.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:14.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:16.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:18.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:20.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:22.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:24.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:26.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:28.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:30.753: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:32.752: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:32.792: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:32.792: INFO: Unexpected error: <*errors.errorString | 0xc0001fda10>: { s: "timed out waiting for the condition", } Nov 25 18:13:32.792: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00116a1e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 18:13:32.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:32.831 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
	test/e2e/auth/service_accounts.go:520 +0x9ab
There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:22:30.977: failed to list events in namespace "svcaccounts-8730": Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/events": dial tcp 35.233.152.153:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:22:31.018: Couldn't delete ns: "svcaccounts-8730": Delete "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/svcaccounts-8730", Err:(*net.OpError)(0xc001f462d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:02:26.089 Nov 25 18:02:26.089: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/25/22 18:02:26.09 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:02:26.297 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:02:26.498 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 25 18:02:26.704: INFO: created pod Nov 25 18:02:26.704: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 25 18:02:26.704: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-8730" to be "running and ready" Nov 25 18:02:26.756: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 51.284039ms Nov 25 18:02:26.756: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:02:28.823: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118187813s Nov 25 18:02:28.823: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:02:30.816: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 4.111841587s Nov 25 18:02:30.816: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 25 18:02:30.816: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient] Nov 25 18:02:30.816: INFO: pod is ready Nov 25 18:03:30.817: INFO: polling logs Nov 25 18:03:30.949: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 18:04:30.817: INFO: polling logs Nov 25 18:04:30.857: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:05:30.817: INFO: polling logs Nov 25 18:05:30.945: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 18:06:30.817: INFO: polling logs Nov 25 18:06:31.064: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m0.527s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m0s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:07:30.817: INFO: polling logs Nov 25 18:07:30.977: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m20.53s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m20.003s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m40.533s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m40.006s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m0.534s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m0.007s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:08:30.817: INFO: polling logs Nov 25 18:08:30.915: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m20.539s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m20.012s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m40.541s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m40.013s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m0.544s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 7m0.016s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 18:09:30.817: INFO: polling logs
Nov 25 18:09:31.098: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #18
  Automatically polling progress:
    [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m20.545s)
      test/e2e/auth/service_accounts.go:432
      In [It] (Node Runtime: 7m20.018s)
        test/e2e/auth/service_accounts.go:432

  Spec Goroutine
  goroutine 1048 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  > k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
      test/e2e/auth/service_accounts.go:503
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 7m40.551s, Node Runtime: 7m40.023s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 8m0.558s, Node Runtime: 8m0.03s)
------------------------------
Nov 25 18:10:30.817: INFO: polling logs
Nov 25 18:10:30.925: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 8m20.56s, Node Runtime: 8m20.032s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 8m40.561s, Node Runtime: 8m40.034s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 9m0.563s, Node Runtime: 9m0.036s)
------------------------------
Nov 25 18:11:30.817: INFO: polling logs
Nov 25 18:11:30.935: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 9m20.566s, Node Runtime: 9m20.038s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 9m40.568s, Node Runtime: 9m40.04s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 10m0.569s, Node Runtime: 10m0.042s)
------------------------------
Nov 25 18:12:30.817: INFO: polling logs
Nov 25 18:12:31.024: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 10m20.571s, Node Runtime: 10m20.044s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 10m40.574s, Node Runtime: 10m40.046s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 11m0.576s, Node Runtime: 11m0.048s)
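The goroutine dump above shows where the spec has been parked for the whole run: a wait.Poll loop at test/e2e/auth/service_accounts.go:503 that re-reads the inclusterclient pod's logs about once a minute and retries until it has seen two distinct tokens. A minimal Go sketch of that shape follows; it is not the test's actual code, and the helper name, the 20-minute timeout, and the token= line format it parses are assumptions.

package e2esketch

import (
	"context"
	"fmt"
	"strings"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRotatedTokens polls the pod's logs until `want` distinct tokens have
// been observed. Namespace, pod and container names come from the log above;
// the timeout and the "token=" line format are assumptions for this sketch.
func waitForRotatedTokens(ctx context.Context, c kubernetes.Interface, ns, pod string, want int) error {
	seen := map[string]struct{}{}
	return wait.Poll(time.Minute, 20*time.Minute, func() (bool, error) {
		fmt.Println("polling logs")
		raw, err := c.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{
			Container: "inclusterclient",
			Previous:  false,
		}).DoRaw(ctx)
		if err != nil {
			// Like the test, treat a failed log fetch as retriable rather than fatal.
			fmt.Printf("Error pulling logs: %v\n", err)
			return false, nil
		}
		// Count every distinct token the pod has reported so far.
		for _, line := range strings.Split(string(raw), "\n") {
			if i := strings.Index(line, "token="); i >= 0 {
				seen[line[i+len("token="):]] = struct{}{}
			}
		}
		if len(seen) < want {
			fmt.Printf("Retrying. Still waiting to see more unique tokens: got=%d, want=%d\n", len(seen), want)
			return false, nil
		}
		return true, nil
	})
}

Once the API server at 35.233.152.153:443 stops answering, every iteration takes the retriable error branch, which is exactly the alternating "polling logs" / "Error pulling logs" pattern in the entries that follow, until wait.Poll finally gives up with "timed out waiting for the condition".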
Nov 25 18:13:30.817: INFO: polling logs
Nov 25 18:13:30.856: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 11m20.578s, Node Runtime: 11m20.051s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 11m40.58s, Node Runtime: 11m40.053s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 12m0.582s, Node Runtime: 12m0.055s)
------------------------------
Nov 25 18:14:30.817: INFO: polling logs
Nov 25 18:14:30.856: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 12m20.584s, Node Runtime: 12m20.057s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 12m40.587s, Node Runtime: 12m40.059s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 13m0.589s, Node Runtime: 13m0.062s)
------------------------------
Nov 25 18:15:30.817: INFO: polling logs
Nov 25 18:15:30.945: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 13m20.591s, Node Runtime: 13m20.064s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 13m40.593s, Node Runtime: 13m40.066s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 14m0.595s, Node Runtime: 14m0.067s)
------------------------------
Nov 25 18:16:30.817: INFO: polling logs
Nov 25 18:16:30.896: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 14m20.596s, Node Runtime: 14m20.069s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 14m40.599s, Node Runtime: 14m40.072s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 15m0.601s, Node Runtime: 15m0.073s)
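What the loop is counting are tokens reported by the pod itself: the inclusterclient pod builds an in-cluster client from its projected service account token and periodically reports which token it currently sees, so a second unique token is evidence that rotation reached the pod. The sketch below illustrates that mechanism only; it is not the source of the pod used in this run, and the token= output format and 30-second interval are assumptions carried over from the sketch above.

package inclusterclientsketch

import (
	"crypto/sha256"
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Default path of the projected service account token inside a pod.
const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

func run() error {
	// InClusterConfig picks up the API server address from the pod environment
	// and the credentials from the projected, periodically rotated token file.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return fmt.Errorf("building in-cluster config: %w", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return fmt.Errorf("building clientset: %w", err)
	}
	for {
		// Prove the current credentials still work against the API server.
		if _, err := client.Discovery().ServerVersion(); err != nil {
			fmt.Printf("request failed: %v\n", err)
		}
		// Report a fingerprint of the token currently on disk; after the kubelet
		// rotates the projected token this value changes, which is what the e2e
		// test counts as a new unique token.
		tok, err := os.ReadFile(tokenPath)
		if err != nil {
			return fmt.Errorf("reading token: %w", err)
		}
		fmt.Printf("token=%x\n", sha256.Sum256(tok))
		time.Sleep(30 * time.Second) // assumed reporting interval
	}
}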
Nov 25 18:17:30.817: INFO: polling logs
Nov 25 18:17:30.887: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 15m20.603s, Node Runtime: 15m20.076s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 15m40.605s, Node Runtime: 15m40.078s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 16m0.609s, Node Runtime: 16m0.082s)
------------------------------
Nov 25 18:18:30.816: INFO: polling logs
Nov 25 18:18:30.856: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 16m20.613s, Node Runtime: 16m20.085s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 16m40.616s, Node Runtime: 16m40.088s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 17m0.618s, Node Runtime: 17m0.091s)
------------------------------
Nov 25 18:19:30.816: INFO: polling logs
Nov 25 18:19:30.856: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 17m20.619s, Node Runtime: 17m20.092s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 17m40.622s, Node Runtime: 17m40.094s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 18m0.624s, Node Runtime: 18m0.097s)
------------------------------
Nov 25 18:20:30.817: INFO: polling logs
Nov 25 18:20:30.857: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 18m20.626s, Node Runtime: 18m20.099s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 18m40.629s, Node Runtime: 18m40.102s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 19m0.631s, Node Runtime: 19m0.104s)
------------------------------
Nov 25 18:21:30.817: INFO: polling logs
Nov 25 18:21:30.856: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 19m20.633s, Node Runtime: 19m20.106s)
Progress Report for Ginkgo Process #18: in [It] (Spec Runtime: 19m40.636s, Node Runtime: 19m40.109s)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #18 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 20m0.637s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 20m0.11s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1048 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002f70a98, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004fc3e08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001ace360, 0xc000e80780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:22:30.817: INFO: polling logs Nov 25 18:22:30.857: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:22:30.857: INFO: polling logs Nov 25 18:22:30.896: INFO: Error pulling logs: Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:22:30.896: FAIL: Unexpected error: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 25 18:22:30.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:22:30.936 STEP: Collecting events from namespace "svcaccounts-8730". 
11/25/22 18:22:30.937 Nov 25 18:22:30.977: INFO: Unexpected error: failed to list events in namespace "svcaccounts-8730": <*url.Error | 0xc0042fe300>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/events", Err: <*net.OpError | 0xc003c82190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001ddc540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003d04140>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:22:30.977: FAIL: failed to list events in namespace "svcaccounts-8730": Get "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00116c5c0, {0xc004fb3f70, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc003c4e4e0}, {0xc004fb3f70, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00116c650?, {0xc004fb3f70?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002643c0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004dfef80?, 0xc001793fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00136e8a8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004dfef80?, 0x29449fc?}, {0xae73300?, 0xc001793f80?, 0x2d5dcbd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-8730" for this suite. 11/25/22 18:22:30.978 Nov 25 18:22:31.018: FAIL: Couldn't delete ns: "svcaccounts-8730": Delete "https://35.233.152.153/api/v1/namespaces/svcaccounts-8730": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/svcaccounts-8730", Err:(*net.OpError)(0xc001f462d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0002643c0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004dfef00?, 0xc002ff0fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004dfef00?, 0x0?}, {0xae73300?, 0x5?, 0xc002f700f0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
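The progress reports above all stop in the same place: a wait.Poll loop at test/e2e/auth/service_accounts.go:503 that keeps re-reading the inclusterclient pod's logs while the apiserver at 35.233.152.153:443 refuses connections, so the only error that ever surfaces is the final "timed out waiting for the condition". A minimal client-go sketch of that polling pattern is below; the function name, intervals, and the success check are illustrative assumptions, not the test's actual code.

package e2esketch

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollPodLogs re-reads a pod's container log on an interval until the
// condition is met or the overall timeout expires. Errors from the apiserver
// (such as "connection refused") are printed and swallowed, so an unreachable
// apiserver only shows up at the end as "timed out waiting for the condition".
func pollPodLogs(cs kubernetes.Interface, ns, pod, container string) error {
	return wait.Poll(30*time.Second, 20*time.Minute, func() (bool, error) {
		fmt.Println("polling logs")
		raw, err := cs.CoreV1().Pods(ns).
			GetLogs(pod, &corev1.PodLogOptions{Container: container}).
			DoRaw(context.TODO())
		if err != nil {
			fmt.Printf("Error pulling logs: %v\n", err) // transient; keep retrying
			return false, nil
		}
		// Illustrative success marker only; the real spec applies its own
		// criteria to the inclusterclient output.
		return strings.Contains(string(raw), "token rotated"), nil
	})
}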
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008802d0) test/e2e/framework/framework.go:241 +0x96f (from junit_01.xml)
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:13:29.505 Nov 25 18:13:29.505: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 18:13:29.507 Nov 25 18:13:29.547: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:31.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:33.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:35.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:37.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:39.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:41.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:43.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:45.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:47.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:49.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:51.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:53.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:55.586: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:57.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:59.587: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:59.626: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:59.626: INFO: Unexpected error: <*errors.errorString | 0xc0001fda10>: { s: "timed out waiting for the condition", } Nov 25 18:13:59.626: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008802d0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 18:13:59.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:59.667 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
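Unlike the previous failure, this spec never reaches its test body: the framework's BeforeEach (test/e2e/framework/framework.go:241) cannot create the per-test namespace because every Post to /api/v1/namespaces is refused, and after retrying for roughly thirty seconds it gives up with "timed out waiting for the condition". The same pattern repeats for the next kubectl spec and the Addon update spec below. A rough client-go sketch of such a create-with-retry loop follows; the helper name, intervals, and GenerateName handling are assumptions, not the framework's implementation.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries namespace creation until it succeeds or the
// poll times out; API errors such as "connection refused" are logged and
// retried rather than returned immediately.
func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var got *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &corev1.Namespace{
			// GenerateName is illustrative; the e2e framework builds its own
			// names like "kubectl-3217".
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}
		created, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // keep retrying on transient errors
		}
		got = created
		return true, nil
	})
	return got, err
}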
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0004f42d0) test/e2e/framework/framework.go:241 +0x96f (from junit_01.xml)
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:14:00.012 Nov 25 18:14:00.012: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 18:14:00.014 Nov 25 18:14:00.054: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:02.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:04.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:06.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:08.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:10.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:12.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:14.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:16.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:18.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:20.093: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:22.094: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:24.093: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:26.093: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:28.093: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:30.093: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:30.133: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:30.133: INFO: Unexpected error: <*errors.errorString | 0xc000195d80>: { s: "timed out waiting for the condition", } Nov 25 18:14:30.133: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0004f42d0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 18:14:30.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:14:30.173 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/framework/kubectl/builder.go:87 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000e426e0?, 0x0?}, {0xc00477bea0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc00477bea0, 0xc}, {0xc004894000, 0x145}, {0xc000b69ec0?, 0x8?, 0x7f33a60cb3c8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc004894000, 0x145}, {0xc00477bea0, 0xc}, {0xc0058ba380, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:13:23.921: failed to list events in namespace "kubectl-3217": Get "https://35.233.152.153/api/v1/namespaces/kubectl-3217/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:13:23.961: Couldn't delete ns: "kubectl-3217": Delete "https://35.233.152.153/api/v1/namespaces/kubectl-3217": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/kubectl-3217", Err:(*net.OpError)(0xc003ea8690)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:08:07.451 Nov 25 18:08:07.451: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 18:08:07.452 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:08:07.621 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:08:07.726 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 18:08:07.817 Nov 25 18:08:07.818: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3217 create -f -' Nov 25 18:08:08.452: INFO: stderr: "" Nov 25 18:08:08.452: INFO: stdout: "pod/httpd created\n" Nov 25 18:08:08.452: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 18:08:08.452: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3217" to be "running and ready" Nov 25 18:08:08.528: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 75.945174ms Nov 25 18:08:08.528: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:10.619: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167000262s Nov 25 18:08:10.619: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:12.608: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155360508s Nov 25 18:08:12.608: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:14.581: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128402557s Nov 25 18:08:14.581: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:16.627: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174828358s Nov 25 18:08:16.627: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:18.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127351284s Nov 25 18:08:18.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:20.603: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.150919889s Nov 25 18:08:20.603: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:22.581: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.128999888s Nov 25 18:08:22.581: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:24.609: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.156889755s Nov 25 18:08:24.609: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:26.589: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.136919102s Nov 25 18:08:26.589: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:28.596: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.14407299s Nov 25 18:08:28.596: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:30.579: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.126881976s Nov 25 18:08:30.579: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:32.635: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.183214452s Nov 25 18:08:32.636: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:34.606: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.154047616s Nov 25 18:08:34.606: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:36.586: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.133581061s Nov 25 18:08:36.586: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:38.646: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.193800193s Nov 25 18:08:38.646: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:40.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.128142733s Nov 25 18:08:40.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:42.578: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.125258084s Nov 25 18:08:42.578: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:44.662: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.209473708s Nov 25 18:08:44.662: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:46.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.128214064s Nov 25 18:08:46.581: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:48.587: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.135051942s Nov 25 18:08:48.587: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:50.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 42.127847939s Nov 25 18:08:50.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:52.582: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 44.129384982s Nov 25 18:08:52.582: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:54.610: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.157473166s Nov 25 18:08:54.610: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:56.575: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.123030261s Nov 25 18:08:56.575: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:08:58.599: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.146539099s Nov 25 18:08:58.599: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:00.625: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 52.173170486s Nov 25 18:09:00.625: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:02.574: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.122056679s Nov 25 18:09:02.574: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:04.597: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 56.144274794s Nov 25 18:09:04.597: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:06.593: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.14115677s Nov 25 18:09:06.593: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:08.620: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.168179084s Nov 25 18:09:08.621: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:10.612: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.15998795s Nov 25 18:09:10.612: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:12.579: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.12656388s Nov 25 18:09:12.579: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:14.588: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.135589923s Nov 25 18:09:14.588: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:16.582: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.129492122s Nov 25 18:09:16.582: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:18.589: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.136942483s Nov 25 18:09:18.589: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:20.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.127881657s Nov 25 18:09:20.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:22.581: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m14.128996298s Nov 25 18:09:22.581: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:24.665: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.212617715s Nov 25 18:09:24.665: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:26.579: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.126660989s Nov 25 18:09:26.579: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:28.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.127481782s Nov 25 18:09:28.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:30.601: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.148859269s Nov 25 18:09:30.601: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:32.577: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.124870312s Nov 25 18:09:32.577: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:34.636: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.184213634s Nov 25 18:09:34.637: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:36.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.127292622s Nov 25 18:09:36.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:38.584: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.13180645s Nov 25 18:09:38.584: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:40.574: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.121581874s Nov 25 18:09:40.574: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:42.606: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.15390901s Nov 25 18:09:42.606: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:44.660: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.207993639s Nov 25 18:09:44.660: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:46.601: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.149067486s Nov 25 18:09:46.601: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:48.610: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.157470936s Nov 25 18:09:48.610: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:50.598: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m42.145817793s Nov 25 18:09:50.598: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:52.586: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.133999633s Nov 25 18:09:52.586: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:54.579: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.127207822s Nov 25 18:09:54.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:56.581: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.128354914s Nov 25 18:09:56.581: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:09:58.632: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.179408124s Nov 25 18:09:58.632: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:00.577: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.125200116s Nov 25 18:10:00.578: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:02.612: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.159592359s Nov 25 18:10:02.612: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:04.584: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.131632086s Nov 25 18:10:04.584: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:06.577: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.125149755s Nov 25 18:10:06.577: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:08.595: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.142366027s Nov 25 18:10:08.595: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:10.577: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.125180009s Nov 25 18:10:10.578: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:12.583: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.130923081s Nov 25 18:10:12.583: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:14.584: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.13130399s Nov 25 18:10:14.584: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:16.582: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.129657433s Nov 25 18:10:16.582: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:18.583: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m10.130341581s Nov 25 18:10:18.583: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:20.596: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.143675684s Nov 25 18:10:20.596: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:22.580: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.127668626s Nov 25 18:10:22.580: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:24.573: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.120919089s Nov 25 18:10:24.573: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:26.662: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.209286219s Nov 25 18:10:26.662: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:28.629: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.177118214s Nov 25 18:10:28.629: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:30.579: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.126980443s Nov 25 18:10:30.579: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:32.584: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.131962977s Nov 25 18:10:32.584: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:34.593: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.140417962s Nov 25 18:10:34.593: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:36.598: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.145586555s Nov 25 18:10:36.598: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:38.583: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.131070647s Nov 25 18:10:38.583: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:40.573: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.121175611s Nov 25 18:10:40.573: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:42.571: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.118582813s Nov 25 18:10:42.571: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:44.570: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.11820484s Nov 25 18:10:44.571: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:46.578: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m38.125624414s Nov 25 18:10:46.578: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:48.595: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.142270788s Nov 25 18:10:48.595: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:50.587: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.134547164s Nov 25 18:10:50.587: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:52.576: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.12382031s Nov 25 18:10:52.576: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 25 18:10:54.574: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.121519203s Nov 25 18:10:54.574: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:10:56.576: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.123266849s Nov 25 18:10:56.576: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:10:58.609: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.156235569s Nov 25 18:10:58.609: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:11:00.576: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.123430001s Nov 25 18:11:00.576: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:11:02.600: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.147901692s Nov 25 18:11:02.600: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:11:04.574: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.121616387s Nov 25 18:11:04.574: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:06.572: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m58.11954297s Nov 25 18:11:06.572: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:08.574: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.121372877s Nov 25 18:11:08.574: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:10.584: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.13215096s Nov 25 18:11:10.585: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:12.575: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.122337285s Nov 25 18:11:12.575: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:14.570: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.118200339s Nov 25 18:11:14.571: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:16.572: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.119541097s Nov 25 18:11:16.572: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:53 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:10:52 +0000 UTC }] Nov 25 18:11:18.779: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 3m10.326301954s Nov 25 18:11:18.779: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 18:11:18.779: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command without --restart=Never, but with --rm test/e2e/kubectl/kubectl.go:571 Nov 25 18:11:18.779: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3217 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --rm --pod-running-timeout=2m0s failure-3 -- /bin/sh -c cat && exit 42' Nov 25 18:13:23.682: INFO: rc: 1 Nov 25 18:13:23.682: INFO: Waiting for pod failure-3 to disappear Nov 25 18:13:23.722: INFO: Encountered non-retryable error while listing pods: Get "https://35.233.152.153/api/v1/namespaces/kubectl-3217/pods": dial tcp 35.233.152.153:443: connect: connection refused [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 18:13:23.722 Nov 25 18:13:23.722: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3217 delete --grace-period=0 --force -f -' Nov 25 18:13:23.841: INFO: rc: 1 Nov 25 18:13:23.841: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0058ba4e0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3217 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://35.233.152.153/api/v1/namespaces/kubectl-3217/pods/httpd\": dial tcp 35.233.152.153:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 18:13:23.842: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3217 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://35.233.152.153/api/v1/namespaces/kubectl-3217/pods/httpd": dial tcp 35.233.152.153:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000e426e0?, 0x0?}, {0xc00477bea0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc00477bea0, 0xc}, {0xc004894000, 0x145}, {0xc000b69ec0?, 0x8?, 0x7f33a60cb3c8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc004894000, 0x145}, {0xc00477bea0, 0xc}, {0xc0058ba380, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 18:13:23.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:23.881 STEP: Collecting events from namespace "kubectl-3217". 
11/25/22 18:13:23.882 Nov 25 18:13:23.921: INFO: Unexpected error: failed to list events in namespace "kubectl-3217": <*url.Error | 0xc00309a030>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/kubectl-3217/events", Err: <*net.OpError | 0xc004790780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005942540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001306040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:13:23.921: FAIL: failed to list events in namespace "kubectl-3217": Get "https://35.233.152.153/api/v1/namespaces/kubectl-3217/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001e925c0, {0xc00477bea0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0019eb040}, {0xc00477bea0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001e92650?, {0xc00477bea0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001255950) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00478a630?, 0xc0035befb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001fb5748?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00478a630?, 0x29449fc?}, {0xae73300?, 0xc0035bef80?, 0x3a212e4?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-3217" for this suite. 11/25/22 18:13:23.922 Nov 25 18:13:23.961: FAIL: Couldn't delete ns: "kubectl-3217": Delete "https://35.233.152.153/api/v1/namespaces/kubectl-3217": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/kubectl-3217", Err:(*net.OpError)(0xc003ea8690)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001255950) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00478a5b0?, 0xc000014900?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000530ad0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00478a5b0?, 0xc000724eb8?}, {0xae73300?, 0xc000051140?, 0xc000051150?}) /usr/local/go/src/reflect/value.go:368 +0xbc
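This spec does reach its body: after the httpd pod finally becomes ready, it runs a failing command (kubectl run -i ... --restart=OnFailure --rm ... -- /bin/sh -c 'cat && exit 42') to check how kubectl reports the container's exit status, but the follow-up pod listing and the cleanup both fail because the apiserver connection is refused. For orientation only, inspecting a child process's exit code in Go looks roughly like the standard-library sketch below; this is not the framework's KubectlBuilder, and the helper name and invocation are hypothetical.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCodeOf runs a command and returns its process exit code, the value the
// "should return command exit codes" specs assert on.
func exitCodeOf(name string, args ...string) int {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // the command could not be started at all
	}
	return 0
}

func main() {
	// Hypothetical invocation mirroring the spec's failing command.
	rc := exitCodeOf("kubectl", "run", "-i", "--rm", "--restart=OnFailure",
		"failure-3", "--image=registry.k8s.io/e2e-test-images/busybox:1.29-4",
		"--", "/bin/sh", "-c", "cat && exit 42")
	fmt.Println("rc:", rc)
}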
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001139d10) test/e2e/framework/framework.go:241 +0x96f (from junit_01.xml)
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:14:29.309 Nov 25 18:14:29.309: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/25/22 18:14:29.311 Nov 25 18:14:29.351: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:31.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:33.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:35.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:37.390: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:39.392: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:41.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:43.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:45.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:47.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:49.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:51.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:53.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:55.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:57.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:59.391: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:59.430: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:59.430: INFO: Unexpected error: <*errors.errorString | 0xc0001c1930>: { s: "timed out waiting for the condition", } Nov 25 18:14:59.430: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001139d10) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/node/init/init.go:32 Nov 25 18:14:59.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:237 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:14:59.47 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1492 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1492 +0x155 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:17:52.620: failed to list events in namespace "esipp-6364": Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:17:52.660: Couldn't delete ns: "esipp-6364": Delete "https://35.233.152.153/api/v1/namespaces/esipp-6364": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-6364", Err:(*net.OpError)(0xc00289c1e0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:11:43.112 Nov 25 18:11:43.112: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:11:43.114 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:11:43.336 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:11:43.447 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-6364/external-local-update with type=LoadBalancer 11/25/22 18:11:43.971 STEP: setting ExternalTrafficPolicy=Local 11/25/22 18:11:43.971 STEP: waiting for loadbalancer for service esipp-6364/external-local-update 11/25/22 18:11:44.107 Nov 25 18:11:44.107: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer Nov 25 18:13:00.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:02.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:04.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:06.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:08.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:10.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:12.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:14.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:16.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:18.210: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:20.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:22.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:24.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:26.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:28.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:30.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:32.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:34.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:36.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:38.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:40.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:42.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:44.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:46.210: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:48.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:50.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:52.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:54.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:56.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:58.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:00.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:02.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:04.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:06.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:08.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:10.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:12.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:14.209: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:16.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:18.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:20.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:22.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:24.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:26.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:28.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:30.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:32.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:34.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:36.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:38.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:40.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:42.209: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:44.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:46.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:48.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:50.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:52.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:54.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:56.209: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:58.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:15:00.210: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/services/external-local-update": dial tcp 35.233.152.153:443: connect: connection refused STEP: creating a pod to be part of the service external-local-update 11/25/22 18:16:18.213 Nov 25 18:16:18.278: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:16:18.322: INFO: Found all 1 pods Nov 25 18:16:18.322: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-shzz4] Nov 25 18:16:18.322: INFO: Waiting up to 2m0s for pod "external-local-update-shzz4" in namespace "esipp-6364" to be "running and ready" Nov 25 18:16:18.366: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.014464ms Nov 25 18:16:18.366: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:20.416: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094016063s Nov 25 18:16:20.416: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:22.478: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.15561143s Nov 25 18:16:22.478: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:24.546: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224272119s Nov 25 18:16:24.546: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:26.426: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103767524s Nov 25 18:16:26.426: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:28.475: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153029136s Nov 25 18:16:28.475: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:30.428: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105931389s Nov 25 18:16:30.428: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:32.430: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.108244302s Nov 25 18:16:32.430: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:34.426: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.104078321s Nov 25 18:16:34.426: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:36.416: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.093591868s Nov 25 18:16:36.416: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:38.441: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11925095s Nov 25 18:16:38.441: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:40.433: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.111332738s Nov 25 18:16:40.433: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:42.414: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.091903932s Nov 25 18:16:42.414: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' ------------------------------ Progress Report for Ginkgo Process #11 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m0.719s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1480 At [By Step] creating a pod to be part of the service external-local-update (Step Runtime: 25.619s) test/e2e/framework/service/jig.go:234 Spec Goroutine goroutine 8195 [chan receive] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x801de88?, 0xc0032a2680}, {0xc001dd79c0, 0xa}, {0xc00166f310, 0x1, 0x1}, 0x1bf08eb000, 0x78965c0, {0x75ee704, ...}) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReady(...) test/e2e/framework/pod/resource.go:501 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsReady(0xc0047f8be0?, {0xc00166f310?, 0x1, 0x0?}) test/e2e/framework/service/jig.go:828 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Run(0xc0047f8be0, 0x0) test/e2e/framework/service/jig.go:753 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0047f8be0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:235 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001e1f380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:16:44.423: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.100797232s Nov 25 18:16:44.423: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:46.425: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.102771443s Nov 25 18:16:46.425: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:48.423: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.100911735s Nov 25 18:16:48.423: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:50.440: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.118246252s Nov 25 18:16:50.440: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:52.419: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.096947374s Nov 25 18:16:52.419: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:54.435: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.113067512s Nov 25 18:16:54.435: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:56.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.100346904s Nov 25 18:16:56.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:16:58.425: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.10266298s Nov 25 18:16:58.425: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:00.431: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.108469016s Nov 25 18:17:00.431: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:02.425: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.102839093s Nov 25 18:17:02.425: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' ------------------------------ Progress Report for Ginkgo Process #11 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m20.721s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:1480 At [By Step] creating a pod to be part of the service external-local-update (Step Runtime: 45.62s) test/e2e/framework/service/jig.go:234 Spec Goroutine goroutine 8195 [chan receive] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x801de88?, 0xc0032a2680}, {0xc001dd79c0, 0xa}, {0xc00166f310, 0x1, 0x1}, 0x1bf08eb000, 0x78965c0, {0x75ee704, ...}) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReady(...) test/e2e/framework/pod/resource.go:501 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsReady(0xc0047f8be0?, {0xc00166f310?, 0x1, 0x0?}) test/e2e/framework/service/jig.go:828 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Run(0xc0047f8be0, 0x0) test/e2e/framework/service/jig.go:753 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0047f8be0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:235 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001e1f380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:04.452: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 46.130072852s Nov 25 18:17:04.452: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:06.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.100392648s Nov 25 18:17:06.423: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:08.428: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.105512396s Nov 25 18:17:08.428: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:10.415: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.092651806s Nov 25 18:17:10.415: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:12.437: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 54.115342037s Nov 25 18:17:12.437: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:14.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 56.099952433s Nov 25 18:17:14.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:16.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.099674594s Nov 25 18:17:16.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:18.468: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.145502066s Nov 25 18:17:18.468: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:20.416: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.094263605s Nov 25 18:17:20.416: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:22.440: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.118143639s Nov 25 18:17:22.440: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' ------------------------------ Progress Report for Ginkgo Process #11 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m40.723s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:1480 At [By Step] creating a pod to be part of the service external-local-update (Step Runtime: 1m5.622s) test/e2e/framework/service/jig.go:234 Spec Goroutine goroutine 8195 [chan receive] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x801de88?, 0xc0032a2680}, {0xc001dd79c0, 0xa}, {0xc00166f310, 0x1, 0x1}, 0x1bf08eb000, 0x78965c0, {0x75ee704, ...}) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReady(...) 
test/e2e/framework/pod/resource.go:501 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsReady(0xc0047f8be0?, {0xc00166f310?, 0x1, 0x0?}) test/e2e/framework/service/jig.go:828 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Run(0xc0047f8be0, 0x0) test/e2e/framework/service/jig.go:753 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0047f8be0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:235 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001e1f380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:24.482: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.160059234s Nov 25 18:17:24.482: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:26.420: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.097748088s Nov 25 18:17:26.420: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:28.447: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.125172283s Nov 25 18:17:28.447: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:30.418: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.096138078s Nov 25 18:17:30.418: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:32.409: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.086813679s Nov 25 18:17:32.409: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:34.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.099954185s Nov 25 18:17:34.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:36.411: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.08933379s Nov 25 18:17:36.411: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:38.443: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.120732258s Nov 25 18:17:38.443: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:40.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m22.099856385s Nov 25 18:17:40.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:42.422: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.099995372s Nov 25 18:17:42.422: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' ------------------------------ Progress Report for Ginkgo Process #11 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m0.725s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m0.007s) test/e2e/network/loadbalancer.go:1480 At [By Step] creating a pod to be part of the service external-local-update (Step Runtime: 1m25.624s) test/e2e/framework/service/jig.go:234 Spec Goroutine goroutine 8195 [chan receive, 2 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x801de88?, 0xc0032a2680}, {0xc001dd79c0, 0xa}, {0xc00166f310, 0x1, 0x1}, 0x1bf08eb000, 0x78965c0, {0x75ee704, ...}) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReady(...) test/e2e/framework/pod/resource.go:501 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsReady(0xc0047f8be0?, {0xc00166f310?, 0x1, 0x0?}) test/e2e/framework/service/jig.go:828 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Run(0xc0047f8be0, 0x0) test/e2e/framework/service/jig.go:753 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0047f8be0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:235 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001e1f380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:44.430: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.107749147s Nov 25 18:17:44.430: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:46.410: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.088140547s Nov 25 18:17:46.410: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:48.426: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.103492918s Nov 25 18:17:48.426: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:50.416: INFO: Pod "external-local-update-shzz4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.093581858s Nov 25 18:17:50.416: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-shzz4' on '' to be 'Running' but was 'Pending' Nov 25 18:17:52.406: INFO: Encountered non-retryable error while getting pod esipp-6364/external-local-update-shzz4: Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/pods/external-local-update-shzz4": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:17:52.406: INFO: Pod external-local-update-shzz4 failed to be running and ready. Nov 25 18:17:52.406: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [external-local-update-shzz4] Nov 25 18:17:52.407: INFO: Unexpected error: <*errors.errorString | 0xc000a58360>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } Nov 25 18:17:52.407: FAIL: failed waiting for pods to be running: timeout waiting for 1 pods to be ready Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1492 +0x155 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:17:52.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 18:17:52.446: INFO: Output of kubectl describe svc: Nov 25 18:17:52.447: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=esipp-6364 describe svc --namespace=esipp-6364' Nov 25 18:17:52.580: INFO: rc: 1 Nov 25 18:17:52.580: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:17:52.581 STEP: Collecting events from namespace "esipp-6364". 
11/25/22 18:17:52.581 Nov 25 18:17:52.620: INFO: Unexpected error: failed to list events in namespace "esipp-6364": <*url.Error | 0xc001acad80>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/esipp-6364/events", Err: <*net.OpError | 0xc003c98640>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ecb590>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0014d4300>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:17:52.620: FAIL: failed to list events in namespace "esipp-6364": Get "https://35.233.152.153/api/v1/namespaces/esipp-6364/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0015f45c0, {0xc001dd79c0, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0032a2680}, {0xc001dd79c0, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0015f4650?, {0xc001dd79c0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0007b6000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001f362a0?, 0xc00360ef50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00360ef40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001f362a0?, 0x2622c40?}, {0xae73300?, 0xc00360ef80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6364" for this suite. 11/25/22 18:17:52.621 Nov 25 18:17:52.660: FAIL: Couldn't delete ns: "esipp-6364": Delete "https://35.233.152.153/api/v1/namespaces/esipp-6364": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-6364", Err:(*net.OpError)(0xc00289c1e0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0007b6000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001f36220?, 0xc004274188?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x71ad140?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001f36220?, 0x78965c0?}, {0xae73300?, 0xc0032a2680?, 0xc001dd79c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
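The spec above first creates a type=LoadBalancer Service, sets ExternalTrafficPolicy=Local, and then waits up to 15 minutes for an ingress address; the long run of "Retrying ...." lines is that wait tolerating the refused connections until the apiserver answers again, after which the test only fails later in the pod-readiness step. A minimal client-go sketch of the create-and-wait sequence, with illustrative names, selector, ports, and clientset wiring (this is not the e2e TestJig code):

// Sketch only: create a Service with type=LoadBalancer and ExternalTrafficPolicy=Local,
// then poll until it has an ingress address. Names, selector, and ports are assumptions.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createLocalLBAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-update"},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal, // "setting ExternalTrafficPolicy=Local"
			Selector:              map[string]string{"app": "external-local-update"},
			Ports:                 []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// "Waiting up to 15m0s for service ... to have a LoadBalancer"
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().Services(ns).Get(ctx, svc.Name, metav1.GetOptions{})
		if err != nil {
			return false, nil // e.g. "connection refused": log-and-retry, as in the output above
		}
		return len(got.Status.LoadBalancer.Ingress) > 0, nil
	})
}

Returning false, nil on a failed Get is what keeps this wait alive through a temporary apiserver outage, which matches the log: the load-balancer wait survives the outage, and the failure only comes from the stricter pod-readiness check afterwards.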
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00115ec40, {0x75c6f7c, 0x9}, 0xc003777ec0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00115ec40, 0x7fa1fc440d60?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00115ec40, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f49770, {0x0, 0x0, 0xc001920650?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:13:00.566: failed to list events in namespace "esipp-7661": Get "https://35.233.152.153/api/v1/namespaces/esipp-7661/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:13:00.606: Couldn't delete ns: "esipp-7661": Delete "https://35.233.152.153/api/v1/namespaces/esipp-7661": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-7661", Err:(*net.OpError)(0xc003a26c80)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:11:15.03 Nov 25 18:11:15.031: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:11:15.033 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:11:15.158 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:11:15.238 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-7661/external-local-nodes with type=LoadBalancer 11/25/22 18:11:15.523 STEP: setting ExternalTrafficPolicy=Local 11/25/22 18:11:15.523 STEP: waiting for loadbalancer for service esipp-7661/external-local-nodes 11/25/22 18:11:15.595 Nov 25 18:11:15.595: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-7661/external-local-nodes 11/25/22 18:12:51.694 Nov 25 18:12:51.694: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-7661 11/25/22 18:12:51.737 STEP: creating a selector 11/25/22 18:12:51.738 STEP: Creating the service pods in kubernetes 11/25/22 18:12:51.738 Nov 25 18:12:51.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 18:12:52.158: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-7661" to be "running and ready" Nov 25 18:12:52.250: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 92.651069ms Nov 25 18:12:52.250: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:12:54.364: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206649024s Nov 25 18:12:54.364: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:12:56.299: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.140992148s Nov 25 18:12:56.299: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:58.340: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.182146998s Nov 25 18:12:58.340: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:13:00.291: INFO: Encountered non-retryable error while getting pod esipp-7661/netserver-0: Get "https://35.233.152.153/api/v1/namespaces/esipp-7661/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:00.291: INFO: Unexpected error: <*fmt.wrapError | 0xc000f41d00>: { msg: "error while waiting for pod esipp-7661/netserver-0 to be running and ready: Get \"https://35.233.152.153/api/v1/namespaces/esipp-7661/pods/netserver-0\": dial tcp 35.233.152.153:443: connect: connection refused", err: <*url.Error | 0xc002f8b680>{ Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/esipp-7661/pods/netserver-0", Err: <*net.OpError | 0xc0049d2a50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002f8b500>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000f41cc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 18:13:00.291: FAIL: error while waiting for pod esipp-7661/netserver-0 to be running and ready: Get "https://35.233.152.153/api/v1/namespaces/esipp-7661/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00115ec40, {0x75c6f7c, 0x9}, 0xc003777ec0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00115ec40, 0x7fa1fc440d60?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00115ec40, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f49770, {0x0, 0x0, 0xc001920650?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 Nov 25 18:13:00.330: INFO: Unexpected error: <*errors.errorString | 0xc0018f5d40>: { s: "failed to get Service \"external-local-nodes\": Get \"https://35.233.152.153/api/v1/namespaces/esipp-7661/services/external-local-nodes\": dial tcp 35.233.152.153:443: connect: connection refused", } Nov 25 18:13:00.330: FAIL: failed to get Service "external-local-nodes": Get "https://35.233.152.153/api/v1/namespaces/esipp-7661/services/external-local-nodes": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5.2() test/e2e/network/loadbalancer.go:1366 +0xae panic({0x70eb7e0, 0xc00011f0a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0007d4820, 0xd0}, {0xc001b1b700?, 0xc0007d4820?, 0xc001b1b728?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc000f41d00}, {0x0?, 0xc0034b8d50?, 0xc000ad7820?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00115ec40, {0x75c6f7c, 0x9}, 0xc003777ec0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00115ec40, 0x7fa1fc440d60?) 
test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00115ec40, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f49770, {0x0, 0x0, 0xc001920650?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:13:00.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 18:13:00.370: INFO: Output of kubectl describe svc: Nov 25 18:13:00.370: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=esipp-7661 describe svc --namespace=esipp-7661' Nov 25 18:13:00.526: INFO: rc: 1 Nov 25 18:13:00.526: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:00.526 STEP: Collecting events from namespace "esipp-7661". 11/25/22 18:13:00.526 Nov 25 18:13:00.566: INFO: Unexpected error: failed to list events in namespace "esipp-7661": <*url.Error | 0xc0047fe150>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/esipp-7661/events", Err: <*net.OpError | 0xc0049d2d20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003622390>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0000da640>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:13:00.566: FAIL: failed to list events in namespace "esipp-7661": Get "https://35.233.152.153/api/v1/namespaces/esipp-7661/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001b165c0, {0xc0034b8d50, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000e28000}, {0xc0034b8d50, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001b16650?, {0xc0034b8d50?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f49770) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001873010?, 0xc004513f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004513f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001873010?, 0x2622c40?}, {0xae73300?, 0xc004513f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-7661" for this suite. 
11/25/22 18:13:00.566 Nov 25 18:13:00.606: FAIL: Couldn't delete ns: "esipp-7661": Delete "https://35.233.152.153/api/v1/namespaces/esipp-7661": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-7661", Err:(*net.OpError)(0xc003a26c80)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f49770) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001872f70?, 0xc003ff1fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001872f70?, 0x0?}, {0xae73300?, 0x5?, 0xc003227ae8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
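This spec gets as far as the networking-test setup and then fails while waiting for netserver-0 to become running and ready: the poll sees the pod Pending, and the eventual "connection refused" on the Get is treated as non-retryable, aborting the wait. A minimal sketch of such a running-and-ready poll with client-go, under the same illustrative clientset assumptions as the earlier sketches:

// Sketch only: poll a pod until it is Running with Ready=true, in the spirit of the
// netserver-0 wait above. The interval is an assumption; error handling is simplified.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRunningReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Mirrors the "non-retryable error while getting pod" above: a failed Get ends the wait.
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil // still Pending; keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // Running but not yet Ready
	})
}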
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/network/loadbalancer.go:1272 k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1272 +0xd8 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:07:14.467 Nov 25 18:07:14.467: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:07:14.469 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:07:14.686 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:07:14.826 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=LoadBalancer test/e2e/network/loadbalancer.go:1266 STEP: creating a service esipp-7698/external-local-lb with type=LoadBalancer 11/25/22 18:07:15.085 STEP: setting ExternalTrafficPolicy=Local 11/25/22 18:07:15.086 STEP: waiting for loadbalancer for service esipp-7698/external-local-lb 11/25/22 18:07:15.216 Nov 25 18:07:15.216: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-lb 11/25/22 18:07:55.341 Nov 25 18:07:55.419: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:07:55.483: INFO: Found all 1 pods Nov 25 18:07:55.483: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-lb-2b47n] Nov 25 18:07:55.483: INFO: Waiting up to 2m0s for pod "external-local-lb-2b47n" in namespace "esipp-7698" to be "running and ready" Nov 25 18:07:55.540: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 56.985817ms Nov 25 18:07:55.540: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:07:57.620: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13652711s Nov 25 18:07:57.620: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:07:59.662: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179006356s Nov 25 18:07:59.662: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:01.595: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112267542s Nov 25 18:08:01.595: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:03.646: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163292914s Nov 25 18:08:03.646: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:05.595: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112289461s Nov 25 18:08:05.595: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:07.605: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.122075875s Nov 25 18:08:07.605: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:09.606: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.122465924s Nov 25 18:08:09.606: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:11.593: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.11027935s Nov 25 18:08:11.593: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:13.727: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 18.244099209s Nov 25 18:08:13.727: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:15.600: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11704289s Nov 25 18:08:15.600: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:17.600: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11721298s Nov 25 18:08:17.600: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:19.611: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 24.127914563s Nov 25 18:08:19.611: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:21.602: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 26.118595987s Nov 25 18:08:21.602: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:23.602: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 28.118564903s Nov 25 18:08:23.602: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:25.597: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 30.114430853s Nov 25 18:08:25.597: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:27.588: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 32.10506828s Nov 25 18:08:27.588: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:29.632: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 34.1487457s Nov 25 18:08:29.632: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:31.616: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.133025908s Nov 25 18:08:31.616: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:33.601: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 38.118234083s Nov 25 18:08:33.601: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:35.620: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 40.13676525s Nov 25 18:08:35.620: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:37.595: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 42.112196022s Nov 25 18:08:37.595: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:39.611: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 44.127829689s Nov 25 18:08:39.611: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:41.599: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 46.115924998s Nov 25 18:08:41.599: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:43.600: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 48.116932433s Nov 25 18:08:43.600: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:45.592: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 50.108742682s Nov 25 18:08:45.592: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:47.624: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 52.141121969s Nov 25 18:08:47.624: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:49.630: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 54.14720959s Nov 25 18:08:49.630: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:51.608: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 56.124558524s Nov 25 18:08:51.608: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:53.638: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 58.154762579s Nov 25 18:08:53.638: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:55.600: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m0.11669071s Nov 25 18:08:55.600: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:57.609: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.126234324s Nov 25 18:08:57.609: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:08:59.619: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.136444378s Nov 25 18:08:59.620: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:01.603: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.120070448s Nov 25 18:09:01.603: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:03.622: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.138970126s Nov 25 18:09:03.622: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:05.602: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.119181578s Nov 25 18:09:05.602: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:07.595: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.112053766s Nov 25 18:09:07.595: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:09.597: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.113994667s Nov 25 18:09:09.597: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:11.593: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.110361658s Nov 25 18:09:11.593: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:13.692: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.208626878s Nov 25 18:09:13.692: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:15.597: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.11353638s Nov 25 18:09:15.597: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:17.610: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.126619513s Nov 25 18:09:17.610: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:19.625: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m24.141797762s Nov 25 18:09:19.625: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:21.613: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.129699202s Nov 25 18:09:21.613: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:23.901: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.417479943s Nov 25 18:09:23.901: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:25.595: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.111485705s Nov 25 18:09:25.595: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:27.617: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.134345427s Nov 25 18:09:27.617: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:29.690: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.207344927s Nov 25 18:09:29.690: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:31.593: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.110323203s Nov 25 18:09:31.593: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:33.648: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.165043048s Nov 25 18:09:33.648: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:35.619: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.135798212s Nov 25 18:09:35.619: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:37.596: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.112977432s Nov 25 18:09:37.596: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:39.609: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.126170437s Nov 25 18:09:39.609: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:41.600: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.116854302s Nov 25 18:09:41.600: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:43.655: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m48.172293359s Nov 25 18:09:43.655: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:45.615: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.131608994s Nov 25 18:09:45.615: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:47.597: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.114102371s Nov 25 18:09:47.597: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:49.636: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.152751218s Nov 25 18:09:49.636: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:51.594: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.111268627s Nov 25 18:09:51.594: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:53.601: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.118065723s Nov 25 18:09:53.601: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:55.608: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.12516089s Nov 25 18:09:55.608: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:55.665: INFO: Pod "external-local-lb-2b47n": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.181702757s Nov 25 18:09:55.665: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-2b47n' on '' to be 'Running' but was 'Pending' Nov 25 18:09:55.665: INFO: Pod external-local-lb-2b47n failed to be running and ready. Nov 25 18:09:55.665: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [external-local-lb-2b47n] Nov 25 18:09:55.665: INFO: Unexpected error: <*errors.errorString | 0xc0045d5130>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } Nov 25 18:09:55.665: FAIL: failed waiting for pods to be running: timeout waiting for 1 pods to be ready Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1272 +0xd8 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:09:55.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 18:09:55.727: INFO: Output of kubectl describe svc: Nov 25 18:09:55.727: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=esipp-7698 describe svc --namespace=esipp-7698' Nov 25 18:09:56.252: INFO: stderr: "" Nov 25 18:09:56.252: INFO: stdout: "Name: external-local-lb\nNamespace: esipp-7698\nLabels: testid=external-local-lb-281b11f3-1fb8-4725-a1dd-7bb6a3a00a9e\nAnnotations: <none>\nSelector: testid=external-local-lb-281b11f3-1fb8-4725-a1dd-7bb6a3a00a9e\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.155.118\nIPs: 10.0.155.118\nLoadBalancer Ingress: 34.105.67.77\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 32397/TCP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Local\nHealthCheck NodePort: 31285\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 2m41s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 2m2s service-controller Ensured load balancer\n" Nov 25 18:09:56.252: INFO: Name: external-local-lb Namespace: esipp-7698 Labels: testid=external-local-lb-281b11f3-1fb8-4725-a1dd-7bb6a3a00a9e Annotations: <none> Selector: testid=external-local-lb-281b11f3-1fb8-4725-a1dd-7bb6a3a00a9e Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.155.118 IPs: 10.0.155.118 LoadBalancer Ingress: 34.105.67.77 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32397/TCP Endpoints: <none> Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 31285 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 2m41s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 2m2s service-controller Ensured load balancer [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:09:56.253 STEP: Collecting events from namespace "esipp-7698". 11/25/22 18:09:56.253 STEP: Found 3 events. 
11/25/22 18:09:56.319 Nov 25 18:09:56.319: INFO: At 2022-11-25 18:07:15 +0000 UTC - event for external-local-lb: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 18:09:56.319: INFO: At 2022-11-25 18:07:54 +0000 UTC - event for external-local-lb: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 18:09:56.319: INFO: At 2022-11-25 18:07:55 +0000 UTC - event for external-local-lb: {replication-controller } SuccessfulCreate: Created pod: external-local-lb-2b47n Nov 25 18:09:56.368: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 18:09:56.368: INFO: external-local-lb-2b47n Pending [] Nov 25 18:09:56.368: INFO: Nov 25 18:09:56.471: INFO: Logging node info for node bootstrap-e2e-master Nov 25 18:09:56.522: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master eb94e66b-ae91-494a-9e40-bf2a53869582 6128 0 2022-11-25 17:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 18:06:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: 
{{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.152.153,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58899ad1ba7a6711fcb2fb23af2e2912,SystemUUID:58899ad1-ba7a-6711-fcb2-fb23af2e2912,BootID:690b7c55-8447-49d5-8a09-10c87046c77c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b 
gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 18:09:56.522: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 18:09:56.581: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 18:09:56.664: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 25 18:09:56.664: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container kube-controller-manager ready: true, restart count 5 Nov 25 18:09:56.664: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 17:55:04 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container kube-addon-manager ready: false, restart count 4 Nov 25 18:09:56.664: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 17:55:04 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container l7-lb-controller ready: true, restart count 6 Nov 25 18:09:56.664: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container kube-apiserver ready: true, restart count 2 Nov 25 18:09:56.664: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container kube-scheduler ready: false, restart count 5 Nov 25 18:09:56.664: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container etcd-container ready: true, restart count 2 Nov 25 18:09:56.664: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:56.664: INFO: Container etcd-container ready: true, restart count 2 Nov 25 18:09:56.664: INFO: metadata-proxy-v0.1-2q8s6 started at 2022-11-25 17:55:31 +0000 UTC (0+2 container statuses recorded) Nov 25 18:09:56.664: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 18:09:56.664: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 18:09:56.948: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 18:09:56.948: INFO: Logging node info for node bootstrap-e2e-minion-group-11zh Nov 25 18:09:56.999: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-11zh 51498931-fa93-403b-99dc-c4f0f6b81384 8304 0 2022-11-25 17:55:35 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-11zh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-11zh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6564":"bootstrap-e2e-minion-group-11zh","csi-hostpath-provisioning-8377":"bootstrap-e2e-minion-group-11zh","csi-mock-csi-mock-volumes-5090":"bootstrap-e2e-minion-group-11zh","csi-mock-csi-mock-volumes-6054":"bootstrap-e2e-minion-group-11zh","csi-mock-csi-mock-volumes-729":"bootstrap-e2e-minion-group-11zh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 18:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 18:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 18:09:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-11zh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 
UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.210.102,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa71c72a5648d3deaeffa3d5a75ed1ea,SystemUUID:aa71c72a-5648-d3de-aeff-a3d5a75ed1ea,BootID:4402c9e1-cf2e-4e88-9a9b-3152017f4dc0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8377^482d555f-6ceb-11ed-bfea-9aff4ac17fc3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8377^482d555f-6ceb-11ed-bfea-9aff4ac17fc3,DevicePath:,},},Config:nil,},} Nov 25 18:09:57.000: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-11zh Nov 25 18:09:57.051: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-11zh Nov 25 18:09:57.167: INFO: netserver-0 started at 2022-11-25 18:02:24 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container webserver ready: true, restart count 0 Nov 25 18:09:57.167: INFO: var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 
18:09:57.167: INFO: Container dapi-container ready: false, restart count 0 Nov 25 18:09:57.167: INFO: pod-secrets-e6d7e190-1317-466d-9a02-a9ecc45d3b08 started at 2022-11-25 17:59:06 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 18:09:57.167: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 17:59:08 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 18:09:57.167: INFO: volume-snapshot-controller-0 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 18:09:57.167: INFO: hostexec-bootstrap-e2e-minion-group-11zh-g52kk started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container agnhost-container ready: false, restart count 3 Nov 25 18:09:57.167: INFO: pod-configmaps-5ec8928e-ebc0-45ba-a6e5-ed8f240d753b started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 18:09:57.167: INFO: lb-internal-tc75f started at 2022-11-25 18:07:14 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container netexec ready: true, restart count 2 Nov 25 18:09:57.167: INFO: csi-mockplugin-0 started at 2022-11-25 17:59:08 +0000 UTC (0+4 container statuses recorded) Nov 25 18:09:57.167: INFO: Container busybox ready: false, restart count 5 Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: false, restart count 5 Nov 25 18:09:57.167: INFO: Container driver-registrar ready: false, restart count 5 Nov 25 18:09:57.167: INFO: Container mock ready: false, restart count 5 Nov 25 18:09:57.167: INFO: back-off-cap started at 2022-11-25 18:07:28 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container back-off-cap ready: false, restart count 4 Nov 25 18:09:57.167: INFO: konnectivity-agent-r2744 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container konnectivity-agent ready: true, restart count 6 Nov 25 18:09:57.167: INFO: csi-mockplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+3 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 18:09:57.167: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 18:09:57.167: INFO: Container mock ready: true, restart count 1 Nov 25 18:09:57.167: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:02:01 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container hostpath ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 18:09:57.167: INFO: kube-proxy-bootstrap-e2e-minion-group-11zh started at 2022-11-25 17:55:35 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container kube-proxy ready: false, restart count 6 Nov 25 18:09:57.167: INFO: 
csi-hostpathplugin-0 started at 2022-11-25 18:06:36 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-resizer ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-snapshotter ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container hostpath ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container liveness-probe ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container node-driver-registrar ready: false, restart count 1 Nov 25 18:09:57.167: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:07:09 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-resizer ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container csi-snapshotter ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container hostpath ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container liveness-probe ready: false, restart count 1 Nov 25 18:09:57.167: INFO: Container node-driver-registrar ready: false, restart count 1 Nov 25 18:09:57.167: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 17:57:31 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: false, restart count 4 Nov 25 18:09:57.167: INFO: pod-secrets-44e1079e-56d1-4e3e-86b7-45008a0901ef started at 2022-11-25 17:59:06 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 18:09:57.167: INFO: inclusterclient started at 2022-11-25 18:02:26 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container inclusterclient ready: false, restart count 0 Nov 25 18:09:57.167: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:01:50 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container hostpath ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 18:09:57.167: INFO: metadata-proxy-v0.1-gzp2t started at 2022-11-25 17:55:36 +0000 UTC (0+2 container statuses recorded) Nov 25 18:09:57.167: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 18:09:57.167: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 18:09:57.167: INFO: pod-back-off-image started at 2022-11-25 18:06:05 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container back-off ready: false, restart count 5 Nov 25 18:09:57.167: INFO: csi-mockplugin-0 started at 2022-11-25 18:06:31 +0000 UTC (0+3 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container driver-registrar ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container mock ready: true, restart count 3 Nov 
25 18:09:57.167: INFO: l7-default-backend-8549d69d99-c2mnz started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 18:09:57.167: INFO: pvc-volume-tester-7q9mv started at 2022-11-25 17:57:50 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container volume-tester ready: false, restart count 0 Nov 25 18:09:57.167: INFO: external-local-nodeport-lkwl7 started at 2022-11-25 18:02:16 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container netexec ready: true, restart count 2 Nov 25 18:09:57.167: INFO: csi-mockplugin-0 started at 2022-11-25 17:59:08 +0000 UTC (0+3 container statuses recorded) Nov 25 18:09:57.167: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container driver-registrar ready: true, restart count 3 Nov 25 18:09:57.167: INFO: Container mock ready: true, restart count 3 Nov 25 18:09:57.167: INFO: hostexec-bootstrap-e2e-minion-group-11zh-rtfbj started at 2022-11-25 17:59:06 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 18:09:57.167: INFO: affinity-lb-transition-6qqjw started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.167: INFO: Container affinity-lb-transition ready: true, restart count 2 Nov 25 18:09:57.167: INFO: pod-subpath-test-dynamicpv-g5m5 started at 2022-11-25 18:02:12 +0000 UTC (1+2 container statuses recorded) Nov 25 18:09:57.167: INFO: Init container init-volume-dynamicpv-g5m5 ready: true, restart count 2 Nov 25 18:09:57.167: INFO: Container test-container-subpath-dynamicpv-g5m5 ready: false, restart count 4 Nov 25 18:09:57.167: INFO: Container test-container-volume-dynamicpv-g5m5 ready: false, restart count 4 Nov 25 18:09:57.512: INFO: Latency metrics for node bootstrap-e2e-minion-group-11zh Nov 25 18:09:57.512: INFO: Logging node info for node bootstrap-e2e-minion-group-4mzt Nov 25 18:09:57.561: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4mzt 22649193-0b27-417f-8621-b5ea24d332ed 8337 0 2022-11-25 17:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4mzt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4mzt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1266":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-multivolume-8912":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-multivolume-9":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-multivolume-9968":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-4943":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-7709":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-924":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-volumemode-4509":"bootstrap-e2e-minion-group-4mzt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 18:05:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 18:08:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 18:09:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-4mzt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.242.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e3433884817dce77b68706e93091a61,SystemUUID:1e343388-4817-dce7-7b68-706e93091a61,BootID:17688f51-d17a-4208-ac49-46ee5ba23c29,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8912^117d612e-6cec-11ed-99c9-225cdbe59089 kubernetes.io/csi/csi-hostpath-multivolume-9^ae0cfa70-6cea-11ed-b2be-0675efa82cc9 kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c kubernetes.io/csi/csi-mock-csi-mock-volumes-5248^abd0f9bb-6cea-11ed-bb76-0e817f325504],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-9^ae0cfa70-6cea-11ed-b2be-0675efa82cc9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8912^117d612e-6cec-11ed-99c9-225cdbe59089,DevicePath:,},},Config:nil,},} Nov 25 18:09:57.562: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4mzt Nov 25 18:09:57.613: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4mzt Nov 25 18:09:57.742: INFO: metadata-proxy-v0.1-27ttr started at 2022-11-25 17:55:34 +0000 UTC (0+2 container statuses recorded) Nov 25 18:09:57.742: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 18:09:57.742: INFO: pod-subpath-test-dynamicpv-l45p started at 2022-11-25 17:57:43 +0000 UTC (1+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Init container init-volume-dynamicpv-l45p ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container test-container-subpath-dynamicpv-l45p ready: false, restart count 0 Nov 25 18:09:57.742: INFO: hostexec-bootstrap-e2e-minion-group-4mzt-9bww2 started at 2022-11-25 17:57:42 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container agnhost-container ready: true, restart count 4 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 17:57:46 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 6 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 25 
18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:06:09 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 18:09:57.742: INFO: konnectivity-agent-57t6m started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 18:09:57.742: INFO: mutability-test-vtnxz started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container netexec ready: false, restart count 6 Nov 25 18:09:57.742: INFO: csi-mockplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+3 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 18:09:57.742: INFO: Container driver-registrar ready: false, restart count 6 Nov 25 18:09:57.742: INFO: Container mock ready: false, restart count 6 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:07:38 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 18:09:57.742: INFO: pod-d53feab5-6d1a-43c4-acc8-7e5a5ed95fb0 started at 2022-11-25 18:07:49 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container write-pod ready: false, restart count 0 Nov 25 18:09:57.742: INFO: csi-mockplugin-0 started at 2022-11-25 17:57:30 +0000 UTC (0+4 container statuses recorded) Nov 25 18:09:57.742: INFO: Container busybox ready: false, restart count 5 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container driver-registrar ready: false, restart count 5 Nov 25 18:09:57.742: INFO: Container mock ready: false, restart count 5 Nov 25 18:09:57.742: INFO: test-hostpath-type-h2l7v started at 2022-11-25 
17:57:37 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:06:07 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 18:09:57.742: INFO: kube-proxy-bootstrap-e2e-minion-group-4mzt started at 2022-11-25 17:55:33 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container kube-proxy ready: true, restart count 5 Nov 25 18:09:57.742: INFO: netserver-1 started at 2022-11-25 18:02:24 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container webserver ready: true, restart count 3 Nov 25 18:09:57.742: INFO: pvc-volume-tester-8kwsf started at 2022-11-25 17:57:50 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container volume-tester ready: false, restart count 0 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:07:40 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 18:09:57.742: INFO: coredns-6d97d5ddb-mvdlj started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container coredns ready: false, restart count 6 Nov 25 18:09:57.742: INFO: affinity-lb-transition-kkfkl started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container affinity-lb-transition ready: true, restart count 1 Nov 25 18:09:57.742: INFO: kube-dns-autoscaler-5f6455f985-r2p5h started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container autoscaler ready: false, restart count 5 Nov 25 18:09:57.742: INFO: test-hostpath-type-9n5dc started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 25 18:09:57.742: INFO: net-tiers-svc-7xgzq started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container netexec ready: false, restart count 4 Nov 25 18:09:57.742: INFO: test-hostpath-type-h4n46 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 17:57:32 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container 
csi-attacher ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 5 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 25 18:09:57.742: INFO: hostpath-symlink-prep-provisioning-7857 started at 2022-11-25 17:57:37 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:57.742: INFO: Container init-volume-provisioning-7857 ready: false, restart count 0 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 17:59:09 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container hostpath ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: false, restart count 4 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 25 18:09:57.742: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:07:42 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:57.742: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container hostpath ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 18:09:57.742: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 18:09:58.196: INFO: Latency metrics for node bootstrap-e2e-minion-group-4mzt Nov 25 18:09:58.196: INFO: Logging node info for node bootstrap-e2e-minion-group-n7kw Nov 25 18:09:58.324: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n7kw 7d1d07a4-95bb-4dd3-9e9f-ddfa4fa14b70 6594 0 2022-11-25 17:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n7kw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n7kw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3103":"bootstrap-e2e-minion-group-n7kw","csi-hostpath-provisioning-5642":"bootstrap-e2e-minion-group-n7kw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 
17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:03:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 18:05:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 18:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-n7kw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.46.68,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:965ed4b1243051174426c5c2fe243ef2,SystemUUID:965ed4b1-2430-5117-4426-c5c2fe243ef2,BootID:ff2bb7b8-8a99-4325-b6b7-5f7a4db4207d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3103^812d9491-6ceb-11ed-9703-a690211c5cab],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3103^812d9491-6ceb-11ed-9703-a690211c5cab,DevicePath:,},},Config:nil,},} Nov 25 18:09:58.324: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n7kw Nov 25 18:09:58.449: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n7kw Nov 25 18:09:58.716: INFO: csi-hostpathplugin-0 started at 2022-11-25 17:59:48 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:58.716: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container hostpath ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 18:09:58.717: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 18:09:58.717: INFO: coredns-6d97d5ddb-rj9gr started at 2022-11-25 17:55:52 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container coredns ready: false, restart count 6 Nov 25 18:09:58.717: INFO: hostexec-bootstrap-e2e-minion-group-n7kw-g27lw started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 18:09:58.717: INFO: metrics-server-v0.5.2-867b8754b9-kfmzh started at 2022-11-25 17:56:08 +0000 UTC (0+2 container statuses 
recorded) Nov 25 18:09:58.717: INFO: Container metrics-server ready: false, restart count 5 Nov 25 18:09:58.717: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 25 18:09:58.717: INFO: pod-subpath-test-inlinevolume-56rr started at 2022-11-25 17:57:35 +0000 UTC (1+2 container statuses recorded) Nov 25 18:09:58.717: INFO: Init container init-volume-inlinevolume-56rr ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container test-container-subpath-inlinevolume-56rr ready: true, restart count 6 Nov 25 18:09:58.717: INFO: Container test-container-volume-inlinevolume-56rr ready: true, restart count 6 Nov 25 18:09:58.717: INFO: pod-subpath-test-preprovisionedpv-w69n started at 2022-11-25 17:57:40 +0000 UTC (1+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Init container init-volume-preprovisionedpv-w69n ready: true, restart count 0 Nov 25 18:09:58.717: INFO: Container test-container-subpath-preprovisionedpv-w69n ready: false, restart count 0 Nov 25 18:09:58.717: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:03:44 +0000 UTC (0+7 container statuses recorded) Nov 25 18:09:58.717: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container hostpath ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 18:09:58.717: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 18:09:58.717: INFO: hostexec-bootstrap-e2e-minion-group-n7kw-hgcqn started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container agnhost-container ready: false, restart count 4 Nov 25 18:09:58.717: INFO: affinity-lb-transition-wj98m started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container affinity-lb-transition ready: true, restart count 5 Nov 25 18:09:58.717: INFO: pod-secrets-e6cecd2b-b99c-4514-962e-4a2a65f920be started at 2022-11-25 17:59:22 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 18:09:58.717: INFO: pod-subpath-test-preprovisionedpv-p82l started at 2022-11-25 17:57:52 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container test-container-subpath-preprovisionedpv-p82l ready: false, restart count 0 Nov 25 18:09:58.717: INFO: csi-mockplugin-0 started at 2022-11-25 18:01:50 +0000 UTC (0+4 container statuses recorded) Nov 25 18:09:58.717: INFO: Container busybox ready: false, restart count 3 Nov 25 18:09:58.717: INFO: Container csi-provisioner ready: false, restart count 4 Nov 25 18:09:58.717: INFO: Container driver-registrar ready: false, restart count 4 Nov 25 18:09:58.717: INFO: Container mock ready: false, restart count 4 Nov 25 18:09:58.717: INFO: konnectivity-agent-979vp started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 18:09:58.717: INFO: hostexec-bootstrap-e2e-minion-group-n7kw-k5cr2 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container agnhost-container ready: true, restart count 5 Nov 25 18:09:58.717: INFO: netserver-2 started at 
2022-11-25 18:02:24 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container webserver ready: false, restart count 5 Nov 25 18:09:58.717: INFO: kube-proxy-bootstrap-e2e-minion-group-n7kw started at 2022-11-25 17:55:32 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container kube-proxy ready: false, restart count 5 Nov 25 18:09:58.717: INFO: metadata-proxy-v0.1-mlww9 started at 2022-11-25 17:55:32 +0000 UTC (0+2 container statuses recorded) Nov 25 18:09:58.717: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 18:09:58.717: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 18:09:58.717: INFO: hostexec-bootstrap-e2e-minion-group-n7kw-4xfsp started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container agnhost-container ready: true, restart count 5 Nov 25 18:09:58.717: INFO: hostpath-injector started at 2022-11-25 18:03:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container hostpath-injector ready: false, restart count 0 Nov 25 18:09:58.717: INFO: pod-53b5be5e-7634-41e1-b41b-e7d8e44ebc0b started at 2022-11-25 17:57:31 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container write-pod ready: false, restart count 0 Nov 25 18:09:58.717: INFO: hostexec-bootstrap-e2e-minion-group-n7kw-5chqs started at 2022-11-25 17:57:48 +0000 UTC (0+1 container statuses recorded) Nov 25 18:09:58.717: INFO: Container agnhost-container ready: true, restart count 5 Nov 25 18:09:59.566: INFO: Latency metrics for node bootstrap-e2e-minion-group-n7kw [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-7698" for this suite. 11/25/22 18:09:59.566
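The per-node pod inventory above ("Logging pods the kubelet thinks is on node ...") can be reproduced outside the e2e framework with a plain client-go query that selects pods by spec.nodeName. A minimal sketch, not part of the framework itself; the node name is taken from the dump above and the kubeconfig path is assumed to be the usual default:

package main

import (
	"context"
	"flag"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", clientcmd.RecommendedHomeFile, "path to kubeconfig")
	node := flag.String("node", "bootstrap-e2e-minion-group-4mzt", "node whose pods to list")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same view as the dump above: every pod scheduled onto the node, across all namespaces.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + *node})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%t restarts=%d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}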
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
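(Unescaped, the --ginkgo.focus pattern above selects the single spec named "Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort".)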
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc002cb6620, {0x75c6f7c, 0x9}, 0xc002d40cc0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002cb6620, 0x7f123c2fbd58?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002cb6620, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ffa000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:03:59.550: failed to list events in namespace "esipp-6370": Get "https://35.233.152.153/api/v1/namespaces/esipp-6370/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:03:59.590: Couldn't delete ns: "esipp-6370": Delete "https://35.233.152.153/api/v1/namespaces/esipp-6370": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-6370", Err:(*net.OpError)(0xc003a352c0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
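The failure originates in createNetProxyPods while it waits for the netserver pods to become running and ready; as the detailed log below shows, once the Get on the pod returns "connection refused" the wait treats the error as non-retryable and aborts. A minimal equivalent of that wait, sketched with plain client-go and apimachinery's wait package (hypothetical helper, not the framework's own implementation):

package podwait // hypothetical package name, for illustration only

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls a pod until it is Running with Ready=True.
// Any API error (such as "connection refused" when the apiserver is down) is
// returned immediately, which mirrors the "non-retryable error" in the log.
func waitForPodRunningAndReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop polling: apiserver unreachable or pod gone
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady {
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}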
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:02:16.128 Nov 25 18:02:16.128: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:02:16.13 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:02:16.408 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:02:16.497 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-6370/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/25/22 18:02:16.717 STEP: creating a pod to be part of the service external-local-nodeport 11/25/22 18:02:16.804 Nov 25 18:02:16.859: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:02:16.917: INFO: Found all 1 pods Nov 25 18:02:16.917: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-lkwl7] Nov 25 18:02:16.917: INFO: Waiting up to 2m0s for pod "external-local-nodeport-lkwl7" in namespace "esipp-6370" to be "running and ready" Nov 25 18:02:16.991: INFO: Pod "external-local-nodeport-lkwl7": Phase="Pending", Reason="", readiness=false. Elapsed: 73.920065ms Nov 25 18:02:16.991: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-lkwl7' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:02:19.129: INFO: Pod "external-local-nodeport-lkwl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212061491s Nov 25 18:02:19.129: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-lkwl7' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:02:21.051: INFO: Pod "external-local-nodeport-lkwl7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133719112s Nov 25 18:02:21.051: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-lkwl7' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:02:23.056: INFO: Pod "external-local-nodeport-lkwl7": Phase="Running", Reason="", readiness=true. Elapsed: 6.138372569s Nov 25 18:02:23.056: INFO: Pod "external-local-nodeport-lkwl7" satisfied condition "running and ready" Nov 25 18:02:23.056: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodeport-lkwl7] STEP: Performing setup for networking test in namespace esipp-6370 11/25/22 18:02:24.184 STEP: creating a selector 11/25/22 18:02:24.184 STEP: Creating the service pods in kubernetes 11/25/22 18:02:24.184 Nov 25 18:02:24.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 18:02:24.686: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-6370" to be "running and ready" Nov 25 18:02:24.829: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.611472ms Nov 25 18:02:24.829: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:02:26.886: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.200599449s Nov 25 18:02:26.886: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:02:28.922: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236466405s Nov 25 18:02:28.922: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:02:30.881: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195224427s Nov 25 18:02:30.881: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:02:32.881: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.195682296s Nov 25 18:02:32.881: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:02:34.880: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.194592519s Nov 25 18:02:34.880: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:02:36.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.208709454s Nov 25 18:02:36.894: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 18:02:36.894: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 18:02:36.970: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-6370" to be "running and ready" Nov 25 18:02:37.069: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 98.3677ms Nov 25 18:02:37.069: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 18:02:39.139: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.16858151s Nov 25 18:02:39.139: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 18:02:41.137: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.166192221s Nov 25 18:02:41.137: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 18:02:43.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.161590974s Nov 25 18:02:43.132: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 18:02:45.139: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.168990348s Nov 25 18:02:45.139: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 18:02:45.139: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 18:02:45.205: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-6370" to be "running and ready" Nov 25 18:02:45.251: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 45.869888ms Nov 25 18:02:45.251: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:47.310: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.10455056s Nov 25 18:02:47.310: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:49.304: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.09890086s Nov 25 18:02:49.304: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:51.321: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.115515497s Nov 25 18:02:51.321: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:53.353: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.147734482s Nov 25 18:02:53.353: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:55.312: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 10.106413363s Nov 25 18:02:55.312: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:57.372: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 12.166814298s Nov 25 18:02:57.372: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:02:59.312: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 14.106379544s Nov 25 18:02:59.312: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:01.304: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 16.098923381s Nov 25 18:03:01.304: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:03.334: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 18.128285361s Nov 25 18:03:03.334: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:05.299: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 20.093112739s Nov 25 18:03:05.299: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:07.313: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 22.107652746s Nov 25 18:03:07.313: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:09.339: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 24.133829911s Nov 25 18:03:09.339: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:11.319: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 26.113137979s Nov 25 18:03:11.319: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:13.352: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 28.14666559s Nov 25 18:03:13.352: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:15.318: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 30.112964495s Nov 25 18:03:15.318: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:17.344: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 32.138666376s Nov 25 18:03:17.344: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:19.309: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 34.103903519s Nov 25 18:03:19.309: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:21.309: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 36.103755915s Nov 25 18:03:21.309: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:23.344: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 38.138964459s Nov 25 18:03:23.344: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:25.334: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 40.128544064s Nov 25 18:03:25.334: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:27.311: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 42.105446428s Nov 25 18:03:27.311: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:29.338: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.132780328s Nov 25 18:03:29.338: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:31.306: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 46.10057618s Nov 25 18:03:31.306: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:33.312: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 48.106632143s Nov 25 18:03:33.312: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:35.293: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 50.087267946s Nov 25 18:03:35.293: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:37.294: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 52.089053297s Nov 25 18:03:37.294: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:39.300: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 54.094796566s Nov 25 18:03:39.300: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:41.298: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 56.092909196s Nov 25 18:03:41.298: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:43.316: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 58.110740659s Nov 25 18:03:43.316: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:45.308: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.10236523s Nov 25 18:03:45.308: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:47.306: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.100684113s Nov 25 18:03:47.306: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:49.309: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.103758612s Nov 25 18:03:49.309: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:51.302: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.096791542s Nov 25 18:03:51.302: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:53.294: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.088344922s Nov 25 18:03:53.294: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:55.294: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.088215231s Nov 25 18:03:55.294: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:57.293: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.087594791s Nov 25 18:03:57.293: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 18:03:59.292: INFO: Encountered non-retryable error while getting pod esipp-6370/netserver-2: Get "https://35.233.152.153/api/v1/namespaces/esipp-6370/pods/netserver-2": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:03:59.292: INFO: Unexpected error: <*fmt.wrapError | 0xc0040322a0>: { msg: "error while waiting for pod esipp-6370/netserver-2 to be running and ready: Get \"https://35.233.152.153/api/v1/namespaces/esipp-6370/pods/netserver-2\": dial tcp 35.233.152.153:443: connect: connection refused", err: <*url.Error | 0xc002094b40>{ Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/esipp-6370/pods/netserver-2", Err: <*net.OpError | 0xc0027da7d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037655f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004032260>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 18:03:59.292: FAIL: error while waiting for pod esipp-6370/netserver-2 to be running and ready: Get "https://35.233.152.153/api/v1/namespaces/esipp-6370/pods/netserver-2": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc002cb6620, {0x75c6f7c, 0x9}, 0xc002d40cc0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002cb6620, 0x7f123c2fbd58?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002cb6620, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ffa000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 Nov 25 18:03:59.332: INFO: Unexpected error: <*url.Error | 0xc0037911a0>: { Op: "Delete", URL: "https://35.233.152.153/api/v1/namespaces/esipp-6370/services/external-local-nodeport", Err: <*net.OpError | 0xc003a34a50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002094f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00102c500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:03:59.332: FAIL: Delete "https://35.233.152.153/api/v1/namespaces/esipp-6370/services/external-local-nodeport": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc000ba92d0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00095a1a0, 0xd0}, {0xc00057f7c0?, 0xc00095a1a0?, 0xc00057f7e8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc0040322a0}, {0x0?, 0xc002864b40?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc002cb6620, {0x75c6f7c, 0x9}, 0xc002d40cc0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002cb6620, 0x7f123c2fbd58?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002cb6620, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ffa000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:03:59.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 18:03:59.371: INFO: Output of kubectl describe svc: Nov 25 18:03:59.371: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=esipp-6370 describe svc --namespace=esipp-6370' Nov 25 18:03:59.510: INFO: rc: 1 Nov 25 18:03:59.510: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:03:59.51 STEP: Collecting events from namespace "esipp-6370". 11/25/22 18:03:59.51 Nov 25 18:03:59.550: INFO: Unexpected error: failed to list events in namespace "esipp-6370": <*url.Error | 0xc003791b90>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/esipp-6370/events", Err: <*net.OpError | 0xc003a34eb0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003791b60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00102cde0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:03:59.550: FAIL: failed to list events in namespace "esipp-6370": Get "https://35.233.152.153/api/v1/namespaces/esipp-6370/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00057a5c0, {0xc002864b40, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0040a0680}, {0xc002864b40, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00057a650?, {0xc002864b40?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000ffa000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001510b80?, 0xc003f50f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001510b80?, 0x7fadfa0?}, {0xae73300?, 0xc003f50f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: 
Destroying namespace "esipp-6370" for this suite. 11/25/22 18:03:59.55 Nov 25 18:03:59.590: FAIL: Couldn't delete ns: "esipp-6370": Delete "https://35.233.152.153/api/v1/namespaces/esipp-6370": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/esipp-6370", Err:(*net.OpError)(0xc003a352c0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ffa000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001510af0?, 0xc003f50fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001510af0?, 0x0?}, {0xae73300?, 0x5?, 0xc00021d770?}) /usr/local/go/src/reflect/value.go:368 +0xbc
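Every FAIL in the log above bottoms out in the same error chain: a *url.Error wrapping a *net.OpError whose syscall errno is 0x6f (111, ECONNREFUSED). In other words, the apiserver at 35.233.152.153:443 stopped accepting connections, so the test body, the event dump, and the namespace deletion all failed for the same underlying reason. Below is a minimal sketch of how such a chain can be unwrapped in Go to recognize that condition; it is illustrative only, is not the e2e framework's error handling, and the isConnRefused helper is a name introduced here.

package main

import (
	"errors"
	"fmt"
	"net"
	"net/url"
	"os"
	"syscall"
)

// isConnRefused reports whether err (typically a *url.Error returned by
// net/http or client-go) ultimately wraps ECONNREFUSED (errno 0x6f / 111),
// the error seen throughout the log above when dialing the apiserver.
func isConnRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	// Reconstruct the kind of error chain printed in the log:
	// Get "https://.../pods/netserver-2": dial tcp ...:443: connect: connection refused
	chain := &url.Error{
		Op:  "Get",
		URL: "https://35.233.152.153/api/v1/namespaces/esipp-6370/pods/netserver-2",
		Err: &net.OpError{
			Op:  "dial",
			Net: "tcp",
			Err: &os.SyscallError{Syscall: "connect", Err: syscall.ECONNREFUSED},
		},
	}
	fmt.Println(isConnRefused(chain)) // true: the whole chain unwraps to ECONNREFUSED
}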
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/service/util.go:48 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:48 +0x265 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 +0x11cefrom junit_01.xml
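The junit summary above points at test/e2e/framework/service/util.go:48, i.e. the HTTP reachability poll that produces the long run of Poking "http://...:30680/echo?msg=hello" lines in the log below: the test issues GETs against the node's NodePort every two seconds until it gets a good response or the five-minute budget expires, then fails with "Could not reach HTTP service". Below is a minimal sketch of that polling pattern, assuming only k8s.io/apimachinery's wait helpers and the standard library; it is not the framework's TestReachableHTTP implementation, and reachableHTTP and its parameters are illustrative.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// reachableHTTP polls url until a GET returns HTTP 200 with a non-empty body,
// mirroring the "Poking ..." / "Poke(...): success" lines in the log. Transient
// errors (connection refused, no route to host, timeouts) are treated as
// "not ready yet" and retried until the timeout expires.
func reachableHTTP(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("Poke(%q): %v\n", url, err) // dial/timeout error: keep polling
			return false, nil
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil || resp.StatusCode != http.StatusOK || len(body) == 0 {
			return false, nil // unexpected response: keep polling
		}
		fmt.Printf("Poke(%q): success\n", url)
		return true, nil
	})
}

func main() {
	// Same shape as the failing step: hit the service's new NodePort for up to 5m.
	url := "http://34.82.210.102:30680/echo?msg=hello"
	if err := reachableHTTP(url, 2*time.Second, 5*time.Minute); err != nil {
		fmt.Printf("Could not reach HTTP service through %s: %v\n", url, err)
	}
}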
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:02:16.03 Nov 25 18:02:16.030: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:02:16.033 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:02:16.34 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:02:16.438 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a TCP service [Slow] test/e2e/network/loadbalancer.go:77 Nov 25 18:02:16.669: INFO: namespace for TCP test: loadbalancers-3301 STEP: creating a TCP service mutability-test with type=ClusterIP in namespace loadbalancers-3301 11/25/22 18:02:16.735 Nov 25 18:02:16.806: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service mutability-test 11/25/22 18:02:16.806 Nov 25 18:02:16.859: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:02:16.920: INFO: Found all 1 pods Nov 25 18:02:16.920: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-w9qww] Nov 25 18:02:16.920: INFO: Waiting up to 2m0s for pod "mutability-test-w9qww" in namespace "loadbalancers-3301" to be "running and ready" Nov 25 18:02:16.991: INFO: Pod "mutability-test-w9qww": Phase="Pending", Reason="", readiness=false. Elapsed: 71.465106ms Nov 25 18:02:16.991: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' to be 'Running' but was 'Pending' Nov 25 18:02:19.129: INFO: Pod "mutability-test-w9qww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209654632s Nov 25 18:02:19.129: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' to be 'Running' but was 'Pending' Nov 25 18:02:21.051: INFO: Pod "mutability-test-w9qww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131639434s Nov 25 18:02:21.051: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' to be 'Running' but was 'Pending' Nov 25 18:02:23.056: INFO: Pod "mutability-test-w9qww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135949331s Nov 25 18:02:23.056: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' to be 'Running' but was 'Pending' Nov 25 18:02:25.048: INFO: Pod "mutability-test-w9qww": Phase="Running", Reason="", readiness=false. Elapsed: 8.127834764s Nov 25 18:02:25.048: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC }] Nov 25 18:02:27.046: INFO: Pod "mutability-test-w9qww": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.126300799s Nov 25 18:02:27.046: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC }] Nov 25 18:02:29.065: INFO: Pod "mutability-test-w9qww": Phase="Running", Reason="", readiness=false. Elapsed: 12.145111571s Nov 25 18:02:29.065: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-w9qww' on 'bootstrap-e2e-minion-group-n7kw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC }] Nov 25 18:02:31.234: INFO: Pod "mutability-test-w9qww": Phase="Running", Reason="", readiness=true. Elapsed: 14.314181648s Nov 25 18:02:31.234: INFO: Pod "mutability-test-w9qww" satisfied condition "running and ready" Nov 25 18:02:31.234: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-w9qww] STEP: changing the TCP service to type=NodePort 11/25/22 18:02:31.234 Nov 25 18:02:31.377: INFO: TCP node port: 30679 STEP: hitting the TCP service's NodePort 11/25/22 18:02:31.377 Nov 25 18:02:31.377: INFO: Poking "http://34.82.210.102:30679/echo?msg=hello" Nov 25 18:02:31.418: INFO: Poke("http://34.82.210.102:30679/echo?msg=hello"): Get "http://34.82.210.102:30679/echo?msg=hello": dial tcp 34.82.210.102:30679: connect: connection refused Nov 25 18:02:33.419: INFO: Poking "http://34.82.210.102:30679/echo?msg=hello" Nov 25 18:02:33.459: INFO: Poke("http://34.82.210.102:30679/echo?msg=hello"): Get "http://34.82.210.102:30679/echo?msg=hello": dial tcp 34.82.210.102:30679: connect: connection refused Nov 25 18:02:35.419: INFO: Poking "http://34.82.210.102:30679/echo?msg=hello" Nov 25 18:02:35.500: INFO: Poke("http://34.82.210.102:30679/echo?msg=hello"): success STEP: creating a static load balancer IP 11/25/22 18:02:35.5 Nov 25 18:02:37.638: INFO: Allocated static load balancer IP: 34.168.230.125 STEP: changing the TCP service to type=LoadBalancer 11/25/22 18:02:37.638 STEP: waiting for the TCP service to have a load balancer 11/25/22 18:02:37.802 Nov 25 18:02:37.802: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 25 18:03:15.951: INFO: TCP load balancer: 34.168.230.125 STEP: demoting the static IP to ephemeral 11/25/22 18:03:15.951 STEP: hitting the TCP service's NodePort 11/25/22 18:03:17.678 Nov 25 18:03:17.678: INFO: Poking "http://34.82.210.102:30679/echo?msg=hello" Nov 25 18:03:17.760: INFO: Poke("http://34.82.210.102:30679/echo?msg=hello"): success STEP: hitting the TCP service's LoadBalancer 11/25/22 18:03:17.76 Nov 25 18:03:17.760: INFO: Poking "http://34.168.230.125:80/echo?msg=hello" Nov 25 18:03:27.761: INFO: 
Poke("http://34.168.230.125:80/echo?msg=hello"): Get "http://34.168.230.125:80/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:03:29.762: INFO: Poking "http://34.168.230.125:80/echo?msg=hello" Nov 25 18:03:29.843: INFO: Poke("http://34.168.230.125:80/echo?msg=hello"): success STEP: changing the TCP service's NodePort 11/25/22 18:03:29.843 Nov 25 18:03:30.022: INFO: TCP node port: 30680 STEP: hitting the TCP service's new NodePort 11/25/22 18:03:30.022 Nov 25 18:03:30.022: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:30.063: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:32.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:32.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:34.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:34.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:36.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:36.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:38.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:38.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:40.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:40.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:42.066: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:42.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:44.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:44.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:46.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:46.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:48.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:48.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:50.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:50.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:52.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:52.103: INFO: 
Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:54.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:54.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:56.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:56.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:03:58.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:03:58.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:00.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:00.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:02.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:02.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:04.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:04.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:06.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:06.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:08.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:08.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:10.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:10.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:12.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:12.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:14.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:14.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:16.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:16.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:18.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:18.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: 
connect: connection refused Nov 25 18:04:20.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:20.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:22.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:22.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:24.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:24.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:26.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:26.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:28.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:28.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:30.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:30.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:32.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:32.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:34.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:34.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:36.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:36.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:38.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:38.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:40.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:40.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:42.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:42.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:44.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:44.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:46.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:46.103: INFO: 
Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:48.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:48.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:50.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:50.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:52.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:52.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:54.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:54.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:56.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:56.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:04:58.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:04:58.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:00.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:00.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:02.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:02.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:04.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:04.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:06.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:06.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:08.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:08.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:10.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:10.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:12.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:12.102: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: 
connect: connection refused Nov 25 18:05:14.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:14.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:16.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:16.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: connection refused Nov 25 18:05:18.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:18.108: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:20.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:20.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:22.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:22.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:24.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:24.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:26.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:26.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:28.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:28.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:30.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:30.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:32.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:32.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:34.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:34.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:36.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:36.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:38.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:38.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:40.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:40.104: INFO: 
Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:42.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:42.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:44.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:44.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:46.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:46.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:48.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:48.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:50.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:50.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:52.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:52.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:54.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:54.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:56.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:56.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:05:58.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:05:58.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:00.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:00.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:02.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:02.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:04.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:04.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:06.065: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:06.106: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 
25 18:06:08.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:08.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:10.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:10.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:12.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:12.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:14.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:14.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:16.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:16.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:18.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:18.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:20.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:20.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:22.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:22.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:24.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:24.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:26.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:26.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:28.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:28.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:30.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:31.117: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:32.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:32.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:34.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:34.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get 
"http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:36.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:36.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:38.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:38.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:40.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:40.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:42.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:42.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:44.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:44.106: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:46.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:46.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:48.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:48.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:50.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:50.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:52.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:52.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:54.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:54.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:56.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:56.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:06:58.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:06:58.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:00.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:00.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:02.064: INFO: Poking 
"http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:02.106: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:04.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:04.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:06.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:06.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:08.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:08.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:10.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:10.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:12.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:12.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:14.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:14.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:16.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:16.103: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m0.576s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 3m46.584s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 1072 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000ccbbc0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2d?, 0xc000e99c20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00132bc80?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00320e300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:07:18.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:18.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:20.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:20.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:22.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:22.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:24.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:24.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:26.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:26.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:28.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:28.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:30.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:30.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:32.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:32.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:34.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:34.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:36.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:36.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m20.578s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:77 At [By Step] 
hitting the TCP service's new NodePort (Step Runtime: 4m6.586s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 1072 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000ccbbc0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2d?, 0xc000e99c20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00132bc80?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00320e300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:07:38.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:38.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:40.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:40.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:42.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:42.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:44.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:44.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:46.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:46.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:48.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:48.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:50.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:50.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:52.063: 
INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:52.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:54.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:54.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:07:56.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:56.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m40.585s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m40.009s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 4m26.593s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 1072 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000ccbbc0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2d?, 0xc000e99c20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00132bc80?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00320e300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:07:58.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:07:58.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:00.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:00.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:02.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:02.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:04.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:04.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:06.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:06.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:08.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:08.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:10.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:10.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:12.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:12.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:14.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:14.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:16.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:16.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m0.586s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m0.011s) test/e2e/network/loadbalancer.go:77 At [By Step] 
hitting the TCP service's new NodePort (Step Runtime: 4m46.594s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 1072 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000ccbbc0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2d?, 0xc000e99c20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00132bc80?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00320e300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:08:18.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:18.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:20.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:20.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:22.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:22.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:24.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:24.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:26.063: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:26.104: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:28.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:28.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:30.064: INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:30.105: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:30.105: 
INFO: Poking "http://34.82.210.102:30680/echo?msg=hello" Nov 25 18:08:30.146: INFO: Poke("http://34.82.210.102:30680/echo?msg=hello"): Get "http://34.82.210.102:30680/echo?msg=hello": dial tcp 34.82.210.102:30680: connect: no route to host Nov 25 18:08:30.147: FAIL: Could not reach HTTP service through 34.82.210.102:30680 after 5m0s Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0035f6890, 0xd}, 0x77d8, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:48 +0x265 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 +0x11ce [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 18:08:30.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 18:08:30.329: INFO: Output of kubectl describe svc: Nov 25 18:08:30.329: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-3301 describe svc --namespace=loadbalancers-3301' Nov 25 18:08:30.728: INFO: stderr: "" Nov 25 18:08:30.728: INFO: stdout: "Name: mutability-test\nNamespace: loadbalancers-3301\nLabels: testid=mutability-test-bc896155-2d8c-4835-aa93-d7e61f55cd2f\nAnnotations: <none>\nSelector: testid=mutability-test-bc896155-2d8c-4835-aa93-d7e61f55cd2f\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.170.102\nIPs: 10.0.170.102\nIP: 34.168.230.125\nLoadBalancer Ingress: 34.168.230.125\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 30680/TCP\nEndpoints: 10.64.1.87:80\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Type 5m53s service-controller NodePort -> LoadBalancer\n Normal EnsuringLoadBalancer 5m (x2 over 5m43s) service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 4m56s (x2 over 5m16s) service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 2m26s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 2m22s service-controller Ensured load balancer\n" Nov 25 18:08:30.728: INFO: Name: mutability-test Namespace: loadbalancers-3301 Labels: testid=mutability-test-bc896155-2d8c-4835-aa93-d7e61f55cd2f Annotations: <none> Selector: testid=mutability-test-bc896155-2d8c-4835-aa93-d7e61f55cd2f Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.170.102 IPs: 10.0.170.102 IP: 34.168.230.125 LoadBalancer Ingress: 34.168.230.125 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30680/TCP Endpoints: 10.64.1.87:80 Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Type 5m53s service-controller NodePort -> LoadBalancer Normal EnsuringLoadBalancer 5m (x2 over 5m43s) service-controller Ensuring load balancer Normal EnsuredLoadBalancer 4m56s (x2 over 5m16s) service-controller Ensured load balancer Normal EnsuringLoadBalancer 2m26s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 2m22s service-controller Ensured load balancer [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | 
framework.go:196 STEP: dump namespace information after failure 11/25/22 18:08:30.728 STEP: Collecting events from namespace "loadbalancers-3301". 11/25/22 18:08:30.729 STEP: Found 13 events. 11/25/22 18:08:30.794 Nov 25 18:08:30.795: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for mutability-test-w9qww: { } Scheduled: Successfully assigned loadbalancers-3301/mutability-test-w9qww to bootstrap-e2e-minion-group-n7kw Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:16 +0000 UTC - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-w9qww Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:20 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:20 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} Created: Created container netexec Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:20 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} Started: Started container netexec Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:37 +0000 UTC - event for mutability-test: {service-controller } Type: NodePort -> LoadBalancer Nov 25 18:08:30.795: INFO: At 2022-11-25 18:02:47 +0000 UTC - event for mutability-test: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 18:08:30.795: INFO: At 2022-11-25 18:03:14 +0000 UTC - event for mutability-test: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 18:08:30.795: INFO: At 2022-11-25 18:05:11 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 18:08:30.795: INFO: At 2022-11-25 18:05:11 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} Killing: Stopping container netexec Nov 25 18:08:30.795: INFO: At 2022-11-25 18:05:15 +0000 UTC - event for mutability-test-w9qww: {kubelet bootstrap-e2e-minion-group-n7kw} BackOff: Back-off restarting failed container netexec in pod mutability-test-w9qww_loadbalancers-3301(4cd1e85b-de2e-45d4-aa2c-8f20c2b95038) Nov 25 18:08:30.795: INFO: At 2022-11-25 18:06:04 +0000 UTC - event for mutability-test: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 18:08:30.795: INFO: At 2022-11-25 18:06:08 +0000 UTC - event for mutability-test: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 18:08:30.854: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 18:08:30.854: INFO: mutability-test-w9qww bootstrap-e2e-minion-group-n7kw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:05:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:05:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:02:16 +0000 UTC }] Nov 25 18:08:30.854: INFO: Nov 25 18:08:30.949: INFO: Unable to fetch loadbalancers-3301/mutability-test-w9qww/netexec logs: an error on the server ("unknown") has prevented the request from succeeding (get pods mutability-test-w9qww) Nov 25 18:08:31.025: INFO: Logging node info for node bootstrap-e2e-master Nov 25 18:08:31.076: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master eb94e66b-ae91-494a-9e40-bf2a53869582 6128 0 2022-11-25 17:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 18:06:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:06:24 +0000 UTC,LastTransitionTime:2022-11-25 17:55:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.152.153,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58899ad1ba7a6711fcb2fb23af2e2912,SystemUUID:58899ad1-ba7a-6711-fcb2-fb23af2e2912,BootID:690b7c55-8447-49d5-8a09-10c87046c77c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 18:08:31.077: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 18:08:31.133: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 18:08:31.213: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 18:08:31.213: INFO: Logging node info for node bootstrap-e2e-minion-group-11zh Nov 25 18:08:31.270: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-11zh 51498931-fa93-403b-99dc-c4f0f6b81384 7932 0 2022-11-25 17:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-11zh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-11zh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2374":"bootstrap-e2e-minion-group-11zh","csi-hostpath-multivolume-6564":"bootstrap-e2e-minion-group-11zh","csi-hostpath-provisioning-121":"bootstrap-e2e-minion-group-11zh","csi-hostpath-provisioning-8377":"bootstrap-e2e-minion-group-11zh","csi-mock-csi-mock-volumes-5090":"bootstrap-e2e-minion-group-11zh","csi-mock-csi-mock-volumes-729":"bootstrap-e2e-minion-group-11zh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 18:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 18:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 18:08:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-11zh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 
UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:07:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.210.102,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa71c72a5648d3deaeffa3d5a75ed1ea,SystemUUID:aa71c72a-5648-d3de-aeff-a3d5a75ed1ea,BootID:4402c9e1-cf2e-4e88-9a9b-3152017f4dc0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8377^482d555f-6ceb-11ed-bfea-9aff4ac17fc3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8377^482d555f-6ceb-11ed-bfea-9aff4ac17fc3,DevicePath:,},},Config:nil,},} Nov 25 18:08:31.271: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-11zh Nov 25 18:08:31.323: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-11zh Nov 25 18:08:31.388: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-11zh: error trying to reach service: No agent available Nov 25 18:08:31.388: INFO: Logging node info for node bootstrap-e2e-minion-group-4mzt Nov 25 18:08:31.443: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4mzt 22649193-0b27-417f-8621-b5ea24d332ed 
7911 0 2022-11-25 17:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4mzt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4mzt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1266":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-multivolume-9968":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-4943":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-7709":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-924":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-volumemode-4509":"bootstrap-e2e-minion-group-4mzt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 18:05:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 18:08:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2022-11-25 18:08:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-4mzt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:08:20 +0000 UTC,LastTransitionTime:2022-11-25 17:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.242.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e3433884817dce77b68706e93091a61,SystemUUID:1e343388-4817-dce7-7b68-706e93091a61,BootID:17688f51-d17a-4208-ac49-46ee5ba23c29,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8912^117d612e-6cec-11ed-99c9-225cdbe59089 kubernetes.io/csi/csi-hostpath-multivolume-9^ae0cfa70-6cea-11ed-b2be-0675efa82cc9 kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c kubernetes.io/csi/csi-mock-csi-mock-volumes-5248^abd0f9bb-6cea-11ed-bb76-0e817f325504],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-9^ae0cfa70-6cea-11ed-b2be-0675efa82cc9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8912^117d612e-6cec-11ed-99c9-225cdbe59089,DevicePath:,},},Config:nil,},} Nov 25 18:08:31.443: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4mzt Nov 25 18:08:31.505: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4mzt Nov 25 18:08:31.571: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4mzt: error trying to reach service: No agent available Nov 25 18:08:31.571: INFO: Logging node info for node bootstrap-e2e-minion-group-n7kw Nov 25 18:08:31.623: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n7kw 7d1d07a4-95bb-4dd3-9e9f-ddfa4fa14b70 6594 0 2022-11-25 17:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-n7kw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-n7kw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3103":"bootstrap-e2e-minion-group-n7kw","csi-hostpath-provisioning-5642":"bootstrap-e2e-minion-group-n7kw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:03:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 18:05:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 18:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-n7kw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 18:05:38 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 18:06:34 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.46.68,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:965ed4b1243051174426c5c2fe243ef2,SystemUUID:965ed4b1-2430-5117-4426-c5c2fe243ef2,BootID:ff2bb7b8-8a99-4325-b6b7-5f7a4db4207d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-3103^812d9491-6ceb-11ed-9703-a690211c5cab],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-3103^812d9491-6ceb-11ed-9703-a690211c5cab,DevicePath:,},},Config:nil,},} Nov 25 18:08:31.624: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n7kw Nov 25 18:08:31.682: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n7kw Nov 25 18:08:31.749: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n7kw: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-3301" for this suite. 11/25/22 18:08:31.75
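The failure above is produced by the reachability helper in test/e2e/framework/service/util.go, which repeatedly pokes the service's NodePort until it answers or the 5m budget expires. Below is a minimal, self-contained sketch of that kind of probe loop (plain net/http plus a retry loop, not the framework helper itself); the URL, 2s interval, and 5m timeout are copied from the log, and the file and function names in the sketch are illustrative only.

// reachhttp_sketch.go: an approximation of the poke loop logged above.
// The real helper (TestReachableHTTPWithRetriableErrorCodes) additionally
// handles retriable HTTP status codes and framework logging.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// pokeUntilReachable GETs url every interval until it returns 200 with a body
// containing want, or until timeout elapses. Transport errors such as
// "connect: no route to host" are treated as retriable, matching the log.
func pokeUntilReachable(url, want string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.Contains(string(body), want) {
				return nil // service is reachable
			}
			err = fmt.Errorf("unexpected response %d: %q", resp.StatusCode, body)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("could not reach HTTP service at %s after %v: %v", url, timeout, err)
		}
		fmt.Printf("Poke(%q): %v, retrying\n", url, err)
		time.Sleep(interval)
	}
}

func main() {
	// Values taken from the failure above; they are examples, not live endpoints.
	err := pokeUntilReachable("http://34.82.210.102:30680/echo?msg=hello", "hello", 2*time.Second, 5*time.Minute)
	fmt.Println(err)
}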
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/loadbalancer.go:314 k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:314 +0x2d7 There were additional failures detected after the initial failure: [FAILED] Nov 25 17:57:53.809: failed to list events in namespace "loadbalancers-8332": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 17:57:53.850: Couldn't delete ns: "loadbalancers-8332": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-8332", Err:(*net.OpError)(0xc00398c000)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
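Both this failure and the TCP variant above bottom out in the same error chain: a *url.Error wrapping a *net.OpError wrapping an *os.SyscallError whose Errno is 0x6f (111, ECONNREFUSED on Linux), i.e. the apiserver at 35.233.152.153:443 stopped accepting connections mid-test. As a sketch only (the names below are illustrative and not part of the e2e framework), that chain can be classified with the standard library like this:

// connrefused_sketch.go: unwraps the nested error shape printed above.
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"syscall"
)

// isConnectionRefused reports whether err, anywhere in its chain, is a TCP
// dial that was refused by the peer (ECONNREFUSED).
func isConnectionRefused(err error) bool {
	var opErr *net.OpError
	if errors.As(err, &opErr) && opErr.Op == "dial" {
		var errno syscall.Errno
		if errors.As(opErr.Err, &errno) {
			return errno == syscall.ECONNREFUSED
		}
	}
	return false
}

func main() {
	// Example only: dialing a port with nothing listening yields the same
	// *url.Error -> *net.OpError shape the e2e cleanup hit against the apiserver.
	_, err := http.Get("http://127.0.0.1:1") // port 1 is almost never open
	fmt.Printf("refused=%v err=%v\n", isConnectionRefused(err), err)
}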
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:28.608 Nov 25 17:57:28.608: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 17:57:28.61 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 17:57:28.757 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 17:57:28.848 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287 Nov 25 17:57:29.103: INFO: namespace for TCP test: loadbalancers-8332 STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-8332 11/25/22 17:57:29.216 Nov 25 17:57:29.339: INFO: service port UDP: 80 STEP: creating a pod to be part of the UDP service mutability-test 11/25/22 17:57:29.339 Nov 25 17:57:29.415: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 17:57:29.475: INFO: Found all 1 pods Nov 25 17:57:29.475: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-vtnxz] Nov 25 17:57:29.475: INFO: Waiting up to 2m0s for pod "mutability-test-vtnxz" in namespace "loadbalancers-8332" to be "running and ready" Nov 25 17:57:29.547: INFO: Pod "mutability-test-vtnxz": Phase="Pending", Reason="", readiness=false. Elapsed: 72.227895ms Nov 25 17:57:29.547: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:31.589: INFO: Pod "mutability-test-vtnxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114221135s Nov 25 17:57:31.589: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:33.590: INFO: Pod "mutability-test-vtnxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114967638s Nov 25 17:57:33.590: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:35.590: INFO: Pod "mutability-test-vtnxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115143313s Nov 25 17:57:35.590: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:37.591: INFO: Pod "mutability-test-vtnxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115394036s Nov 25 17:57:37.591: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:39.590: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.114595647s Nov 25 17:57:39.590: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:41.589: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. Elapsed: 12.113547292s Nov 25 17:57:41.589: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:43.591: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. Elapsed: 14.115939407s Nov 25 17:57:43.591: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:45.590: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. Elapsed: 16.115030056s Nov 25 17:57:45.590: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:47.591: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.115566388s Nov 25 17:57:47.591: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:49.591: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. Elapsed: 20.115831582s Nov 25 17:57:49.591: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:51.591: INFO: Pod "mutability-test-vtnxz": Phase="Running", Reason="", readiness=false. Elapsed: 22.115457738s Nov 25 17:57:51.591: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-vtnxz' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:53.587: INFO: Encountered non-retryable error while getting pod loadbalancers-8332/mutability-test-vtnxz: Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332/pods/mutability-test-vtnxz": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:57:53.587: INFO: Pod mutability-test-vtnxz failed to be running and ready. Nov 25 17:57:53.587: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [mutability-test-vtnxz] Nov 25 17:57:53.587: INFO: Unexpected error: <*errors.errorString | 0xc00101a0f0>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } Nov 25 17:57:53.588: FAIL: failed waiting for pods to be running: timeout waiting for 1 pods to be ready Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:314 +0x2d7 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 17:57:53.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 17:57:53.627: INFO: Output of kubectl describe svc: Nov 25 17:57:53.628: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8332 describe svc --namespace=loadbalancers-8332' Nov 25 17:57:53.768: INFO: rc: 1 Nov 25 17:57:53.769: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 17:57:53.769 STEP: Collecting events from namespace "loadbalancers-8332". 11/25/22 17:57:53.769 Nov 25 17:57:53.808: INFO: Unexpected error: failed to list events in namespace "loadbalancers-8332": <*url.Error | 0xc003818a20>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332/events", Err: <*net.OpError | 0xc00325fe50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00391a7b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0036e67e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 17:57:53.809: FAIL: failed to list events in namespace "loadbalancers-8332": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0033a85c0, {0xc003cd5a88, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0034861a0}, {0xc003cd5a88, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0033a8650?, {0xc003cd5a88?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00127c4b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000fc1250?, 0xc0033dd3e0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00391a240?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000fc1250?, 0x7fae060?}, {0xae73300?, 0x1?, 0x2622c40?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-8332" for this suite. 
11/25/22 17:57:53.809 Nov 25 17:57:53.850: FAIL: Couldn't delete ns: "loadbalancers-8332": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-8332": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-8332", Err:(*net.OpError)(0xc00398c000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00127c4b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000fc1180?, 0xc0000cff08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0033d6330?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000fc1180?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
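The primary failure is the "failed waiting for pods to be running" timeout: the test polls the pod until its phase is Running and its Ready condition is True, and here the poll dies when the apiserver connection is refused. The following is a minimal client-go sketch of that kind of running-and-ready check; it is not the e2e framework's own helper, and the poll interval, kubeconfig path, namespace and pod name are assumptions taken from the log only to make the example concrete.

// Minimal sketch: poll a pod until it is Running and Ready, in the spirit of
// the "running and ready" checks logged above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodRunningAndReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// The log above hits exactly this branch once the apiserver starts
			// refusing connections; returning the error aborts the poll.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	err = waitForPodRunningAndReady(client, "loadbalancers-8332", "mutability-test-vtnxz", 2*time.Minute)
	fmt.Println("wait result:", err)
}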
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
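The long --ginkgo.focus value above is just a regular expression that Ginkgo matches against the full spec text; \s stands for the spaces and \[ / \] escape the literal brackets in the suite and sig labels. A tiny self-contained check of that claim follows (illustrative only, using Go's regexp package rather than Ginkgo itself).

// Confirm that the focus expression from the command above matches the spec name.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$`)
	spec := "Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]"
	fmt.Println(focus.MatchString(spec)) // prints: true
}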
test/e2e/network/loadbalancer.go:638
k8s.io/kubernetes/test/e2e/network.glob..func19.6()
	test/e2e/network/loadbalancer.go:638 +0x634
There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:26:02.166: failed to list events in namespace "loadbalancers-137": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-137/events": dial tcp 35.233.152.153:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:26:02.206: Couldn't delete ns: "loadbalancers-137": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-137": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-137", Err:(*net.OpError)(0xc00264b040)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:07:13.18 Nov 25 18:07:13.180: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:07:13.181 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:07:13.478 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:07:13.614 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/25/22 18:07:13.899 Nov 25 18:07:14.016: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:07:14.096: INFO: Found 0/1 pods - will retry Nov 25 18:07:16.184: INFO: Found all 1 pods Nov 25 18:07:16.184: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-tc75f] Nov 25 18:07:16.184: INFO: Waiting up to 2m0s for pod "lb-internal-tc75f" in namespace "loadbalancers-137" to be "running and ready" Nov 25 18:07:16.256: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 72.410838ms Nov 25 18:07:16.256: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:18.347: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162721396s Nov 25 18:07:18.347: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:20.328: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143786734s Nov 25 18:07:20.328: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:22.331: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147085257s Nov 25 18:07:22.331: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:24.375: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191255607s Nov 25 18:07:24.375: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:26.314: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130549804s Nov 25 18:07:26.315: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:28.390: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.20631029s Nov 25 18:07:28.390: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:30.418: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.234046797s Nov 25 18:07:30.418: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:32.390: INFO: Pod "lb-internal-tc75f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.205582672s Nov 25 18:07:32.390: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' to be 'Running' but was 'Pending' Nov 25 18:07:34.315: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. Elapsed: 18.130931417s Nov 25 18:07:34.315: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:36.311: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. Elapsed: 20.126706575s Nov 25 18:07:36.311: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:38.339: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. Elapsed: 22.15555955s Nov 25 18:07:38.340: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:40.322: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.138106222s Nov 25 18:07:40.322: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:42.314: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. Elapsed: 26.129953224s Nov 25 18:07:42.314: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:44.356: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=false. Elapsed: 28.171733569s Nov 25 18:07:44.356: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-tc75f' on 'bootstrap-e2e-minion-group-11zh' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:24 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:07:14 +0000 UTC }] Nov 25 18:07:46.309: INFO: Pod "lb-internal-tc75f": Phase="Running", Reason="", readiness=true. Elapsed: 30.124985007s Nov 25 18:07:46.309: INFO: Pod "lb-internal-tc75f" satisfied condition "running and ready" Nov 25 18:07:46.309: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-internal-tc75f] STEP: creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled 11/25/22 18:07:46.309 Nov 25 18:07:46.511: INFO: Waiting up to 15m0s for service "lb-internal" to have a LoadBalancer STEP: hitting the internal load balancer from pod 11/25/22 18:08:58.623 Nov 25 18:08:58.623: INFO: creating pod with host network Nov 25 18:08:58.623: INFO: Creating new host exec pod Nov 25 18:08:58.790: INFO: Waiting up to 5m0s for pod "ilb-host-exec" in namespace "loadbalancers-137" to be "running and ready" Nov 25 18:08:58.873: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 82.592785ms Nov 25 18:08:58.873: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:00.949: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.158931492s Nov 25 18:09:00.949: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:02.930: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139811837s Nov 25 18:09:02.930: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:04.920: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130228654s Nov 25 18:09:04.920: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:06.936: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145865329s Nov 25 18:09:06.936: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:08.938: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14751996s Nov 25 18:09:08.938: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:10.937: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.147263451s Nov 25 18:09:10.937: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:12.936: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 14.145773752s Nov 25 18:09:12.936: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:14.934: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.143804334s Nov 25 18:09:14.934: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:16.946: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.155590442s Nov 25 18:09:16.946: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:18.975: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 20.18523377s Nov 25 18:09:18.975: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:20.957: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 22.167122321s Nov 25 18:09:20.957: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:22.964: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 24.173608839s Nov 25 18:09:22.964: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:25.004: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 26.213613935s Nov 25 18:09:25.004: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:26.925: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 28.134668726s Nov 25 18:09:26.925: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:28.955: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 30.164755469s Nov 25 18:09:28.955: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:30.937: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.146399454s Nov 25 18:09:30.937: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:32.930: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 34.140215413s Nov 25 18:09:32.930: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:34.935: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 36.145090267s Nov 25 18:09:34.935: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:36.940: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 38.149801095s Nov 25 18:09:36.940: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:38.928: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 40.138180052s Nov 25 18:09:38.928: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:40.951: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 42.160547367s Nov 25 18:09:40.951: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:42.986: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 44.195648543s Nov 25 18:09:42.986: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:44.941: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 46.150769679s Nov 25 18:09:44.941: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:46.918: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 48.127942241s Nov 25 18:09:46.918: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:48.957: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 50.166334761s Nov 25 18:09:48.957: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:50.927: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 52.136655313s Nov 25 18:09:50.927: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:52.937: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 54.146343403s Nov 25 18:09:52.937: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:54.938: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 56.147358755s Nov 25 18:09:54.938: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:56.932: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 58.141699515s Nov 25 18:09:56.932: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:09:59.175: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.384494644s Nov 25 18:09:59.175: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:00.925: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m2.135045419s Nov 25 18:10:00.925: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:02.978: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.188203064s Nov 25 18:10:02.978: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:04.992: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.202019529s Nov 25 18:10:04.992: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:06.934: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.143787172s Nov 25 18:10:06.934: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:08.935: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.144868228s Nov 25 18:10:08.935: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:10.968: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.177397182s Nov 25 18:10:10.968: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:12.927: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.137284107s Nov 25 18:10:12.927: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:14.948: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.158261032s Nov 25 18:10:14.948: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:16.961: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.171053389s Nov 25 18:10:16.961: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:18.929: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.138526817s Nov 25 18:10:18.929: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:20.951: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.160686632s Nov 25 18:10:20.951: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:22.989: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.198765581s Nov 25 18:10:22.989: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:24.934: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.144130475s Nov 25 18:10:24.934: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:26.957: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.167187485s Nov 25 18:10:26.957: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:28.962: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.171812559s Nov 25 18:10:28.962: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:30.930: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.140065954s Nov 25 18:10:30.930: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:32.941: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.15119924s Nov 25 18:10:32.941: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:34.944: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.153880473s Nov 25 18:10:34.944: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:36.944: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.153998956s Nov 25 18:10:36.944: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:39.043: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.253059682s Nov 25 18:10:39.043: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:40.927: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.136865868s Nov 25 18:10:40.927: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:42.915: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.124940373s Nov 25 18:10:42.915: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:44.926: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.135326416s Nov 25 18:10:44.926: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:46.928: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.137528478s Nov 25 18:10:46.928: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:49.066: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.276169225s Nov 25 18:10:49.066: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:50.939: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.149048942s Nov 25 18:10:50.939: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:53.021: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.230873405s Nov 25 18:10:53.021: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:54.914: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.12410577s Nov 25 18:10:54.914: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:56.955: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.165281017s Nov 25 18:10:56.955: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:10:58.985: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.194579219s Nov 25 18:10:58.985: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:11:00.917: INFO: Pod "ilb-host-exec": Phase="Running", Reason="", readiness=true. 
Elapsed: 2m2.12696797s Nov 25 18:11:00.917: INFO: The phase of Pod ilb-host-exec is Running (Ready = true) Nov 25 18:11:00.917: INFO: Pod "ilb-host-exec" satisfied condition "running and ready" Nov 25 18:11:00.917: INFO: Waiting up to 15m0s for service "lb-internal"'s internal LB to respond to requests Nov 25 18:11:00.917: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:11:01.690: INFO: rc: 7 Nov 25 18:11:01.690: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 1 ms: Connection refused command terminated with exit code 7 error: exit status 7 Nov 25 18:11:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:11:22.457: INFO: rc: 7 Nov 25 18:11:22.457: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 Nov 25 18:11:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:11:42.410: INFO: rc: 7 Nov 25 18:11:42.410: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 1 ms: Connection refused command terminated with exit code 7 error: exit status 7 Nov 25 18:12:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:12:02.348: INFO: rc: 7 Nov 25 18:12:02.348: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 1 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m0.634s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 3m15.191s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:12:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:12:22.157: INFO: rc: 1 Nov 25 18:12:22.157: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m20.636s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 3m35.193s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:12:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:12:42.184: INFO: rc: 1 Nov 25 18:12:42.184: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m40.638s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 3m55.195s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:13:01.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:13:01.806: INFO: rc: 1 Nov 25 18:13:01.806: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m0.642s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m0.009s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m15.199s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:13:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:13:21.810: INFO: rc: 1 Nov 25 18:13:21.810: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m20.645s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m20.012s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m35.202s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:13:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:13:41.813: INFO: rc: 1 Nov 25 18:13:41.813: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m40.647s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m40.014s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m55.204s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:14:01.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:14:01.812: INFO: rc: 1 Nov 25 18:14:01.812: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m0.651s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m0.018s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m15.208s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:14:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:14:21.816: INFO: rc: 1 Nov 25 18:14:21.816: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m20.653s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m20.02s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m35.21s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:14:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:14:41.816: INFO: rc: 1 Nov 25 18:14:41.816: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m40.656s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m40.023s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m55.213s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:15:01.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:15:01.825: INFO: rc: 1 Nov 25 18:15:01.825: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m0.659s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m0.026s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m15.216s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:15:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:15:22.035: INFO: rc: 1 Nov 25 18:15:22.035: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m20.66s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m20.027s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m35.217s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:15:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:15:42.069: INFO: rc: 1 Nov 25 18:15:42.070: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m40.663s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m40.029s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m55.219s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:16:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:16:02.039: INFO: rc: 1 Nov 25 18:16:02.039: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m0.665s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m0.032s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m15.222s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:16:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:16:22.077: INFO: rc: 1 Nov 25 18:16:22.077: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m20.666s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m20.033s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m35.223s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:16:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:16:42.142: INFO: rc: 1 Nov 25 18:16:42.142: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m40.669s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m40.036s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m55.226s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:01.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:17:04.996: INFO: rc: 7 Nov 25 18:17:04.996: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m0.672s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m0.039s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m15.229s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:17:22.478: INFO: rc: 7 Nov 25 18:17:22.478: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m20.675s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m20.042s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m35.232s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:17:42.463: INFO: rc: 7 Nov 25 18:17:42.463: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m40.678s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m40.045s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m55.235s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:18:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:18:01.827: INFO: rc: 1 Nov 25 18:18:01.827: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m0.68s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m0.047s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m15.237s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:18:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:18:21.820: INFO: rc: 1 Nov 25 18:18:21.820: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m20.682s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m20.048s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m35.238s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:18:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:18:41.817: INFO: rc: 1 Nov 25 18:18:41.817: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m40.684s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m40.051s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m55.241s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:19:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:19:01.807: INFO: rc: 1 Nov 25 18:19:01.807: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m0.687s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m0.054s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 10m15.244s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:19:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:19:21.823: INFO: rc: 1 Nov 25 18:19:21.823: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m20.689s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m20.056s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 10m35.246s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:19:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:19:41.815: INFO: rc: 1 Nov 25 18:19:41.815: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m40.692s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m40.059s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 10m55.249s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:20:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:20:01.814: INFO: rc: 1 Nov 25 18:20:01.814: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m0.694s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m0.061s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 11m15.251s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:20:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:20:21.812: INFO: rc: 1 Nov 25 18:20:21.812: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m20.697s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m20.064s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 11m35.254s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:20:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:20:41.822: INFO: rc: 1 Nov 25 18:20:41.822: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m40.7s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m40.067s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 11m55.257s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:21:01.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:21:01.816: INFO: rc: 1 Nov 25 18:21:01.816: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m0.702s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m0.069s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 12m15.259s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:21:21.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:21:21.815: INFO: rc: 1 Nov 25 18:21:21.815: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m20.705s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m20.072s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 12m35.262s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:21:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:21:41.811: INFO: rc: 1 Nov 25 18:21:41.811: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m40.709s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m40.075s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 12m55.266s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:22:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:22:01.817: INFO: rc: 1 Nov 25 18:22:01.817: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m0.712s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 15m0.078s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 13m15.268s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:22:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:22:21.807: INFO: rc: 1 Nov 25 18:22:21.807: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m20.714s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 15m20.081s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 13m35.271s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:22:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:22:41.820: INFO: rc: 1 Nov 25 18:22:41.820: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m40.716s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 15m40.083s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 13m55.273s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:23:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:23:01.827: INFO: rc: 1 Nov 25 18:23:01.827: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m0.72s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 16m0.087s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 14m15.277s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:23:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:23:21.809: INFO: rc: 1 Nov 25 18:23:21.809: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m20.724s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 16m20.091s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 14m35.281s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:23:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:23:42.026: INFO: rc: 1 Nov 25 18:23:42.026: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m40.726s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 16m40.093s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 14m55.283s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:24:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:24:02.022: INFO: rc: 1 Nov 25 18:24:02.022: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 17m0.728s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 17m0.095s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 15m15.285s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:24:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:24:22.093: INFO: rc: 1 Nov 25 18:24:22.093: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 17m20.73s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 17m20.097s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 15m35.287s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:24:41.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:24:42.072: INFO: rc: 1 Nov 25 18:24:42.072: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 17m40.733s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 17m40.1s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 15m55.29s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:25:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:25:02.040: INFO: rc: 1 Nov 25 18:25:02.040: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 18m0.735s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 18m0.102s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 16m15.292s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:25:21.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:25:22.070: INFO: rc: 1 Nov 25 18:25:22.070: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 18m20.737s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 18m20.104s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 16m35.294s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:25:41.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:25:42.042: INFO: rc: 1 Nov 25 18:25:42.042: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 18m40.739s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 18m40.106s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 16m55.296s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 1675 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002d441e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004071d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0008caa80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:26:01.691: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:26:01.808: INFO: rc: 1 Nov 25 18:26:01.808: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 Nov 25 18:26:01.808: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 18:26:01.923: INFO: rc: 1 Nov 25 18:26:01.923: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 35.233.152.153 was refused - did you specify the right host or port? error: exit status 1 Nov 25 18:26:01.923: FAIL: ginkgo.Failed to hit ILB IP, err: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:638 +0x634 STEP: Clean up loadbalancer service 11/25/22 18:26:01.923 STEP: Delete service with finalizer 11/25/22 18:26:01.923 Nov 25 18:26:01.963: FAIL: Failed to delete service loadbalancers-137/lb-internal Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceDeletedWithFinalizer({0x801de88, 0xc002f84820}, {0xc003ab6378, 0x11}, {0xc00375da40, 0xb}) test/e2e/framework/service/wait.go:37 +0x185 k8s.io/kubernetes/test/e2e/network.glob..func19.6.3() test/e2e/network/loadbalancer.go:602 +0x67 panic({0x70eb7e0, 0xc000e0c620}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Failf({0x7679434?, 0x4?}, {0xc004071e58?, 0x44?, 0xc004071f40?}) test/e2e/framework/log.go:49 +0x12c k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:638 +0x634 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 18:26:01.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 18:26:02.003: INFO: Output of kubectl describe svc: Nov 25 18:26:02.003: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-137 describe svc --namespace=loadbalancers-137' Nov 25 18:26:02.125: INFO: rc: 1 Nov 25 18:26:02.125: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:26:02.126 STEP: Collecting events from namespace "loadbalancers-137". 
11/25/22 18:26:02.126 Nov 25 18:26:02.166: INFO: Unexpected error: failed to list events in namespace "loadbalancers-137": <*url.Error | 0xc0026182d0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/loadbalancers-137/events", Err: <*net.OpError | 0xc004598e60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0052b8a50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011135c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:26:02.166: FAIL: failed to list events in namespace "loadbalancers-137": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-137/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0006925c0, {0xc002f50120, 0x11}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002f84820}, {0xc002f50120, 0x11}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000692650?, {0xc002f50120?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000b344b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00357e240?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00357e240?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-137" for this suite. 11/25/22 18:26:02.167 Nov 25 18:26:02.206: FAIL: Couldn't delete ns: "loadbalancers-137": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-137": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-137", Err:(*net.OpError)(0xc00264b040)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b344b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00357e1c0?, 0xc00061fdd0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00061fe90?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00357e1c0?, 0xc00061ef30?}, {0xae73300?, 0xc00061ef20?, 0xc00061f0c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
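Note on the failure above: the spec polls roughly every 20 seconds, shelling into the ilb-host-exec pod and curling the internal load balancer address (http://10.138.0.6:80/echo?msg=hello) until it answers or the wait.PollImmediate call at test/e2e/network/loadbalancer.go:622 times out; once the API server stops answering, even the kubectl exec hop fails with "connection refused". Below is a minimal Go sketch of that retry pattern, assuming kubectl is on PATH and using the apimachinery wait package; the namespace, pod name, address, and timeout are copied from the log for illustration only and this is not the test's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Internal LB address, namespace, and pod name taken from the log above; timeout is illustrative.
        const ilbURL = "http://10.138.0.6:80/echo?msg=hello"
        err := wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
            out, curlErr := exec.Command("kubectl", "--namespace=loadbalancers-137",
                "exec", "ilb-host-exec", "--", "curl", "-m", "5", ilbURL).CombinedOutput()
            if curlErr != nil {
                fmt.Printf("error curling; output: %q, err: %v\n", out, curlErr)
                return false, nil // swallow the error so the poll keeps retrying, as the e2e loop does
            }
            return true, nil
        })
        if err != nil {
            // On expiry this reports "timed out waiting for the condition", matching the FAIL above.
            fmt.Printf("failed to hit ILB IP: %v\n", err)
        }
    }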
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc00198a680}, 0xc000757900, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.11() test/e2e/network/loadbalancer.go:809 +0xf3 There were additional failures detected after the initial failure: [FAILED] Nov 25 17:58:22.205: Couldn't delete ns: "loadbalancers-924": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-924": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-924", Err:(*net.OpError)(0xc002814af0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
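For context on the spec above: "ESIPP off" means the Service keeps externalTrafficPolicy: Cluster (ESIPP, external source IP preservation, corresponds to externalTrafficPolicy: Local) while its session affinity is toggled between ClientIP and None. A minimal sketch of such a Service built with client-go types follows, under the assumption of an agnhost-style backend; the selector and target port are illustrative and not taken from the test code.

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // affinityLBService builds a LoadBalancer Service with ClientIP session affinity and
    // externalTrafficPolicy Cluster ("ESIPP off"); name and namespace are from the log,
    // the selector and target port are illustrative.
    func affinityLBService() *v1.Service {
        return &v1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb-transition", Namespace: "loadbalancers-924"},
            Spec: v1.ServiceSpec{
                Type:                  v1.ServiceTypeLoadBalancer,
                SessionAffinity:       v1.ServiceAffinityClientIP, // toggled between ClientIP and None during the test
                ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeCluster,
                Selector:              map[string]string{"name": "affinity-lb-transition"},
                Ports: []v1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(9376),
                }},
            },
        }
    }

    func main() {
        fmt.Println(affinityLBService().Name)
    }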
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:29.128 Nov 25 17:57:29.128: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 17:57:29.13 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 17:57:29.393 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 17:57:29.486 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:802 STEP: creating service in namespace loadbalancers-924 11/25/22 17:57:29.635 STEP: creating service affinity-lb-transition in namespace loadbalancers-924 11/25/22 17:57:29.635 STEP: creating replication controller affinity-lb-transition in namespace loadbalancers-924 11/25/22 17:57:29.728 I1125 17:57:29.792726 8242 runners.go:193] Created replication controller with name: affinity-lb-transition, namespace: loadbalancers-924, replica count: 3 I1125 17:57:32.843123 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 17:57:35.843379 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 17:57:38.844135 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 17:57:41.844745 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1125 17:57:44.844923 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1125 17:57:47.845533 8242 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 17:57:47.845553 8242 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-11zh I1125 17:57:47.902146 8242 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-11zh 51498931-fa93-403b-99dc-c4f0f6b81384 1490 0 2022-11-25 17:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-11zh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-11zh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-729":"bootstrap-e2e-minion-group-11zh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 17:55:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 17:57:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-11zh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.210.102,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa71c72a5648d3deaeffa3d5a75ed1ea,SystemUUID:aa71c72a-5648-d3de-aeff-a3d5a75ed1ea,BootID:4402c9e1-cf2e-4e88-9a9b-3152017f4dc0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I1125 
17:57:47.902549 8242 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-11zh I1125 17:57:47.985263 8242 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-11zh I1125 17:57:48.082060 8242 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-11zh started at 2022-11-25 17:55:35 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082089 8242 runners.go:193] Container kube-proxy ready: true, restart count 1 I1125 17:57:48.082093 8242 runners.go:193] metadata-proxy-v0.1-gzp2t started at 2022-11-25 17:55:36 +0000 UTC (0+2 container statuses recorded) I1125 17:57:48.082097 8242 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 17:57:48.082100 8242 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 17:57:48.082102 8242 runners.go:193] volume-snapshot-controller-0 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082106 8242 runners.go:193] Container volume-snapshot-controller ready: true, restart count 2 I1125 17:57:48.082108 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-11zh-g52kk started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082111 8242 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 17:57:48.082113 8242 runners.go:193] pod-configmaps-5ec8928e-ebc0-45ba-a6e5-ed8f240d753b started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082116 8242 runners.go:193] Container agnhost-container ready: false, restart count 0 I1125 17:57:48.082119 8242 runners.go:193] l7-default-backend-8549d69d99-c2mnz started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082122 8242 runners.go:193] Container default-http-backend ready: true, restart count 0 I1125 17:57:48.082127 8242 runners.go:193] external-provisioner-kpg9c started at 2022-11-25 17:57:30 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082130 8242 runners.go:193] Container nfs-provisioner ready: true, restart count 0 I1125 17:57:48.082132 8242 runners.go:193] csi-mockplugin-attacher-0 started at 2022-11-25 17:57:31 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082135 8242 runners.go:193] Container csi-attacher ready: true, restart count 1 I1125 17:57:48.082137 8242 runners.go:193] pod-subpath-test-preprovisionedpv-8d5r started at 2022-11-25 17:57:41 +0000 UTC (1+1 container statuses recorded) I1125 17:57:48.082141 8242 runners.go:193] Init container init-volume-preprovisionedpv-8d5r ready: true, restart count 0 I1125 17:57:48.082143 8242 runners.go:193] Container test-container-subpath-preprovisionedpv-8d5r ready: false, restart count 0 I1125 17:57:48.082145 8242 runners.go:193] var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082149 8242 runners.go:193] Container dapi-container ready: false, restart count 0 I1125 17:57:48.082152 8242 runners.go:193] konnectivity-agent-r2744 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082156 8242 runners.go:193] Container konnectivity-agent ready: true, restart count 0 I1125 17:57:48.082241 8242 runners.go:193] test-hostpath-type-qn5c5 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082262 8242 runners.go:193] Container host-path-testing ready: false, restart count 0 I1125 17:57:48.082266 8242 
runners.go:193] affinity-lb-transition-6qqjw started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.082270 8242 runners.go:193] Container affinity-lb-transition ready: true, restart count 1 I1125 17:57:48.082273 8242 runners.go:193] csi-mockplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+3 container statuses recorded) I1125 17:57:48.082276 8242 runners.go:193] Container csi-provisioner ready: true, restart count 0 I1125 17:57:48.082285 8242 runners.go:193] Container driver-registrar ready: true, restart count 0 I1125 17:57:48.082287 8242 runners.go:193] Container mock ready: true, restart count 0 I1125 17:57:48.339797 8242 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-11zh I1125 17:57:48.339811 8242 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-4mzt I1125 17:57:48.394996 8242 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4mzt 22649193-0b27-417f-8621-b5ea24d332ed 1560 0 2022-11-25 17:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4mzt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4mzt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-7709":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-volumemode-4509":"bootstrap-e2e-minion-group-4mzt","csi-mock-csi-mock-volumes-5248":"bootstrap-e2e-minion-group-4mzt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 17:55:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 17:57:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 17:57:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-4mzt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.242.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e3433884817dce77b68706e93091a61,SystemUUID:1e343388-4817-dce7-7b68-706e93091a61,BootID:17688f51-d17a-4208-ac49-46ee5ba23c29,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volumemode-4509^a8891b85-6cea-11ed-abfe-e27e4eb3b800,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c,DevicePath:,},},Config:nil,},} I1125 17:57:48.395599 8242 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-4mzt I1125 17:57:48.442843 8242 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4mzt I1125 17:57:48.540499 8242 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-4mzt started at 2022-11-25 17:55:33 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540521 8242 runners.go:193] Container kube-proxy ready: true, restart count 1 I1125 17:57:48.540525 8242 runners.go:193] test-hostpath-type-kvlz4 started at <nil> (0+0 container statuses recorded) I1125 17:57:48.540559 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-4mzt-9bww2 started at <nil> (0+0 container statuses recorded) I1125 17:57:48.540567 8242 runners.go:193] 
csi-hostpathplugin-0 started at <nil> (0+0 container statuses recorded) I1125 17:57:48.540574 8242 runners.go:193] coredns-6d97d5ddb-mvdlj started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540579 8242 runners.go:193] Container coredns ready: true, restart count 0 I1125 17:57:48.540581 8242 runners.go:193] konnectivity-agent-57t6m started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540584 8242 runners.go:193] Container konnectivity-agent ready: true, restart count 0 I1125 17:57:48.540587 8242 runners.go:193] mutability-test-vtnxz started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540590 8242 runners.go:193] Container netexec ready: false, restart count 0 I1125 17:57:48.540592 8242 runners.go:193] affinity-lb-transition-kkfkl started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540596 8242 runners.go:193] Container affinity-lb-transition ready: true, restart count 1 I1125 17:57:48.540598 8242 runners.go:193] csi-mockplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+3 container statuses recorded) I1125 17:57:48.540601 8242 runners.go:193] Container csi-provisioner ready: false, restart count 0 I1125 17:57:48.540603 8242 runners.go:193] Container driver-registrar ready: false, restart count 0 I1125 17:57:48.540605 8242 runners.go:193] Container mock ready: false, restart count 0 I1125 17:57:48.540607 8242 runners.go:193] pod-7b14ae0d-edce-4652-aa00-260ee1d616ff started at <nil> (0+0 container statuses recorded) I1125 17:57:48.540614 8242 runners.go:193] hostpath-symlink-prep-provisioning-7857 started at 2022-11-25 17:57:37 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540618 8242 runners.go:193] Container init-volume-provisioning-7857 ready: false, restart count 0 I1125 17:57:48.540620 8242 runners.go:193] kube-dns-autoscaler-5f6455f985-r2p5h started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540623 8242 runners.go:193] Container autoscaler ready: true, restart count 1 I1125 17:57:48.540625 8242 runners.go:193] test-hostpath-type-9n5dc started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540628 8242 runners.go:193] Container host-path-sh-testing ready: true, restart count 0 I1125 17:57:48.540630 8242 runners.go:193] net-tiers-svc-7xgzq started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540633 8242 runners.go:193] Container netexec ready: false, restart count 0 I1125 17:57:48.540635 8242 runners.go:193] test-hostpath-type-h4n46 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540637 8242 runners.go:193] Container host-path-testing ready: true, restart count 0 I1125 17:57:48.540639 8242 runners.go:193] test-hostpath-type-c2qxf started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540642 8242 runners.go:193] Container host-path-testing ready: false, restart count 0 I1125 17:57:48.540644 8242 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+7 container statuses recorded) I1125 17:57:48.540647 8242 runners.go:193] Container csi-attacher ready: false, restart count 0 I1125 17:57:48.540649 8242 runners.go:193] Container csi-provisioner ready: false, restart count 0 I1125 17:57:48.540651 8242 runners.go:193] Container csi-resizer ready: false, restart count 0 I1125 17:57:48.540653 8242 runners.go:193] Container 
csi-snapshotter ready: false, restart count 0 I1125 17:57:48.540655 8242 runners.go:193] Container hostpath ready: false, restart count 0 I1125 17:57:48.540657 8242 runners.go:193] Container liveness-probe ready: false, restart count 0 I1125 17:57:48.540663 8242 runners.go:193] Container node-driver-registrar ready: false, restart count 0 I1125 17:57:48.540665 8242 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 17:57:32 +0000 UTC (0+7 container statuses recorded) I1125 17:57:48.540668 8242 runners.go:193] Container csi-attacher ready: false, restart count 0 I1125 17:57:48.540670 8242 runners.go:193] Container csi-provisioner ready: false, restart count 0 I1125 17:57:48.540672 8242 runners.go:193] Container csi-resizer ready: false, restart count 0 I1125 17:57:48.540674 8242 runners.go:193] Container csi-snapshotter ready: false, restart count 0 I1125 17:57:48.540676 8242 runners.go:193] Container hostpath ready: false, restart count 0 I1125 17:57:48.540678 8242 runners.go:193] Container liveness-probe ready: false, restart count 0 I1125 17:57:48.540680 8242 runners.go:193] Container node-driver-registrar ready: false, restart count 0 I1125 17:57:48.540682 8242 runners.go:193] metadata-proxy-v0.1-27ttr started at 2022-11-25 17:55:34 +0000 UTC (0+2 container statuses recorded) I1125 17:57:48.540685 8242 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 17:57:48.540687 8242 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 17:57:48.540689 8242 runners.go:193] csi-mockplugin-0 started at 2022-11-25 17:57:30 +0000 UTC (0+4 container statuses recorded) I1125 17:57:48.540692 8242 runners.go:193] Container busybox ready: false, restart count 0 I1125 17:57:48.540694 8242 runners.go:193] Container csi-provisioner ready: false, restart count 0 I1125 17:57:48.540695 8242 runners.go:193] Container driver-registrar ready: false, restart count 0 I1125 17:57:48.540697 8242 runners.go:193] Container mock ready: false, restart count 0 I1125 17:57:48.540699 8242 runners.go:193] test-hostpath-type-h2l7v started at 2022-11-25 17:57:37 +0000 UTC (0+1 container statuses recorded) I1125 17:57:48.540702 8242 runners.go:193] Container host-path-testing ready: false, restart count 0 I1125 17:57:48.540704 8242 runners.go:193] pod-subpath-test-dynamicpv-l45p started at 2022-11-25 17:57:43 +0000 UTC (1+1 container statuses recorded) I1125 17:57:48.540707 8242 runners.go:193] Init container init-volume-dynamicpv-l45p ready: false, restart count 0 I1125 17:57:48.540710 8242 runners.go:193] Container test-container-subpath-dynamicpv-l45p ready: false, restart count 0 I1125 17:57:50.134844 8242 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-4mzt I1125 17:57:50.134858 8242 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-n7kw I1125 17:57:50.180025 8242 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-n7kw 7d1d07a4-95bb-4dd3-9e9f-ddfa4fa14b70 1435 0 2022-11-25 17:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-n7kw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 17:55:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 17:57:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-n7kw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 
91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:57:40 +0000 UTC,LastTransitionTime:2022-11-25 17:55:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.46.68,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-n7kw.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:965ed4b1243051174426c5c2fe243ef2,SystemUUID:965ed4b1-2430-5117-4426-c5c2fe243ef2,BootID:ff2bb7b8-8a99-4325-b6b7-5f7a4db4207d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I1125 17:57:50.180733 8242 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-n7kw I1125 17:57:50.223413 8242 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n7kw I1125 17:57:50.277946 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-5chqs started at <nil> (0+0 container statuses recorded) I1125 17:57:50.277997 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-9ckkg started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278005 8242 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 17:57:50.278009 8242 runners.go:193] pod-53b5be5e-7634-41e1-b41b-e7d8e44ebc0b started at 2022-11-25 17:57:31 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278013 8242 runners.go:193] Container write-pod ready: false, restart count 0 I1125 17:57:50.278016 8242 
runners.go:193] pod-subpath-test-inlinevolume-56rr started at 2022-11-25 17:57:35 +0000 UTC (1+2 container statuses recorded) I1125 17:57:50.278019 8242 runners.go:193] Init container init-volume-inlinevolume-56rr ready: true, restart count 0 I1125 17:57:50.278022 8242 runners.go:193] Container test-container-subpath-inlinevolume-56rr ready: true, restart count 0 I1125 17:57:50.278024 8242 runners.go:193] Container test-container-volume-inlinevolume-56rr ready: true, restart count 0 I1125 17:57:50.278027 8242 runners.go:193] volume-prep-provisioning-2434 started at 2022-11-25 17:57:40 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278030 8242 runners.go:193] Container init-volume-provisioning-2434 ready: false, restart count 0 I1125 17:57:50.278032 8242 runners.go:193] pod-subpath-test-preprovisionedpv-w69n started at 2022-11-25 17:57:40 +0000 UTC (1+1 container statuses recorded) I1125 17:57:50.278036 8242 runners.go:193] Init container init-volume-preprovisionedpv-w69n ready: false, restart count 0 I1125 17:57:50.278038 8242 runners.go:193] Container test-container-subpath-preprovisionedpv-w69n ready: false, restart count 0 I1125 17:57:50.278044 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-hgcqn started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278047 8242 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 17:57:50.278049 8242 runners.go:193] affinity-lb-transition-wj98m started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278053 8242 runners.go:193] Container affinity-lb-transition ready: true, restart count 1 I1125 17:57:50.278055 8242 runners.go:193] pod-subpath-test-preprovisionedpv-j7tt started at <nil> (0+0 container statuses recorded) I1125 17:57:50.278063 8242 runners.go:193] konnectivity-agent-979vp started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278066 8242 runners.go:193] Container konnectivity-agent ready: true, restart count 0 I1125 17:57:50.278069 8242 runners.go:193] coredns-6d97d5ddb-rj9gr started at 2022-11-25 17:55:52 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278071 8242 runners.go:193] Container coredns ready: true, restart count 0 I1125 17:57:50.278074 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-k5cr2 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278077 8242 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 17:57:50.278079 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-g27lw started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278082 8242 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 17:57:50.278085 8242 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-n7kw started at 2022-11-25 17:55:32 +0000 UTC (0+1 container statuses recorded) I1125 17:57:50.278088 8242 runners.go:193] Container kube-proxy ready: true, restart count 1 I1125 17:57:50.278090 8242 runners.go:193] metadata-proxy-v0.1-mlww9 started at 2022-11-25 17:55:32 +0000 UTC (0+2 container statuses recorded) I1125 17:57:50.278093 8242 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 17:57:50.278095 8242 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 17:57:50.278097 8242 runners.go:193] metrics-server-v0.5.2-867b8754b9-kfmzh started at 2022-11-25 17:56:08 +0000 UTC (0+2 container 
statuses recorded)
I1125 17:57:50.278100 8242 runners.go:193] Container metrics-server ready: true, restart count 1
I1125 17:57:50.278102 8242 runners.go:193] Container metrics-server-nanny ready: true, restart count 1
I1125 17:57:50.278104 8242 runners.go:193] hostexec-bootstrap-e2e-minion-group-n7kw-4xfsp started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded)
I1125 17:57:50.278107 8242 runners.go:193] Container agnhost-container ready: false, restart count 0
I1125 17:57:50.278109 8242 runners.go:193] pod-subpath-test-preprovisionedpv-htxr started at 2022-11-25 17:57:39 +0000 UTC (1+1 container statuses recorded)
I1125 17:57:50.278112 8242 runners.go:193] Init container init-volume-preprovisionedpv-htxr ready: true, restart count 0
I1125 17:57:50.278114 8242 runners.go:193] Container test-container-subpath-preprovisionedpv-htxr ready: false, restart count 0
I1125 17:57:50.545423 8242 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-n7kw
I1125 17:57:50.587531 8242 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-924
Nov 25 17:57:50.587: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-924: <*errors.errorString | 0xc004f189c0>: { s: "3 containers failed which is more than allowed 0", }
Nov 25 17:57:50.587: FAIL: failed to create replication controller with service in the namespace: loadbalancers-924: 3 containers failed which is more than allowed 0
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc00198a680}, 0xc000757900, 0x1)
test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...)
test/e2e/network/service.go:3962
k8s.io/kubernetes/test/e2e/network.glob..func19.11()
test/e2e/network/loadbalancer.go:809 +0xf3
[AfterEach] [sig-network] LoadBalancers
test/e2e/framework/node/init/init.go:32
Nov 25 17:57:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers
test/e2e/network/loadbalancer.go:71
Nov 25 17:57:50.631: INFO: Output of kubectl describe svc:
Nov 25 17:57:50.631: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-924 describe svc --namespace=loadbalancers-924'
Nov 25 17:57:50.991: INFO: stderr: ""
Nov 25 17:57:50.991: INFO: stdout: "Name: affinity-lb-transition\nNamespace: loadbalancers-924\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-transition\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.142.37\nIPs: 10.0.142.37\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 30764/TCP\nEndpoints: 10.64.1.9:9376,10.64.2.21:9376,10.64.3.18:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 21s service-controller Ensuring load balancer\n"
Nov 25 17:57:50.991: INFO: Name: affinity-lb-transition
Namespace: loadbalancers-924
Labels: <none>
Annotations: <none>
Selector: name=affinity-lb-transition
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.142.37
IPs: 10.0.142.37
Port: <unset> 80/TCP
TargetPort: 9376/TCP
NodePort: <unset> 30764/TCP
Endpoints: 10.64.1.9:9376,10.64.2.21:9376,10.64.3.18:9376
Session Affinity: ClientIP
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 21s service-controller Ensuring load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 17:57:50.992
STEP: Collecting events from namespace "loadbalancers-924". 11/25/22 17:57:50.992
STEP: Found 25 events. 11/25/22 17:57:51.034
Nov 25 17:57:51.034: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-6qqjw: { } Scheduled: Successfully assigned loadbalancers-924/affinity-lb-transition-6qqjw to bootstrap-e2e-minion-group-11zh
Nov 25 17:57:51.035: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-kkfkl: { } Scheduled: Successfully assigned loadbalancers-924/affinity-lb-transition-kkfkl to bootstrap-e2e-minion-group-4mzt
Nov 25 17:57:51.035: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-wj98m: { } Scheduled: Successfully assigned loadbalancers-924/affinity-lb-transition-wj98m to bootstrap-e2e-minion-group-n7kw
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:29 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-wj98m
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:29 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-kkfkl
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:29 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-6qqjw
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:29 +0000 UTC - event for affinity-lb-transition: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:32 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:33 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Started: Started container affinity-lb-transition
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:33 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Created: Created container affinity-lb-transition
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:33 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.43" in 1.096944844s (1.096966651s including waiting)
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:34 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Nov 25 17:57:51.035: INFO: At 2022-11-25 17:57:34 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Killing: Stopping container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:35 +0000 UTC - event for affinity-lb-transition-6qqjw: {kubelet bootstrap-e2e-minion-group-11zh} Started: Started container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:35 +0000 UTC - event for affinity-lb-transition-6qqjw: {kubelet bootstrap-e2e-minion-group-11zh} Created: Created container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:35 +0000 UTC - event for affinity-lb-transition-6qqjw: {kubelet bootstrap-e2e-minion-group-11zh} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:35 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.43" in 1.641539914s (1.641549659s including waiting)
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:36 +0000 UTC - event for affinity-lb-transition-6qqjw: {kubelet bootstrap-e2e-minion-group-11zh} Killing: Stopping container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:36 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} Killing: Stopping container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:36 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} Started: Started container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:36 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} Created: Created container affinity-lb-transition
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:37 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:38 +0000 UTC - event for affinity-lb-transition-wj98m: {kubelet bootstrap-e2e-minion-group-n7kw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:39 +0000 UTC - event for affinity-lb-transition-kkfkl: {kubelet bootstrap-e2e-minion-group-4mzt} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 17:57:51.037: INFO: At 2022-11-25 17:57:40 +0000 UTC - event for affinity-lb-transition-6qqjw: {kubelet bootstrap-e2e-minion-group-11zh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 17:57:51.079: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 25 17:57:51.079: INFO: affinity-lb-transition-6qqjw bootstrap-e2e-minion-group-11zh Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }]
Nov 25 17:57:51.079: INFO: affinity-lb-transition-kkfkl bootstrap-e2e-minion-group-4mzt Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }]
Nov 25 17:57:51.079: INFO: affinity-lb-transition-wj98m bootstrap-e2e-minion-group-n7kw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }]
Nov 25 17:57:51.079: INFO:
Nov 25 17:57:51.262: INFO: Logging node info for node bootstrap-e2e-master
Nov 25 17:57:51.304: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master eb94e66b-ae91-494a-9e40-bf2a53869582 572 0 2022-11-25 17:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64
kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 17:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:55:50 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:55:50 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:55:50 +0000 UTC,LastTransitionTime:2022-11-25 17:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:55:50 +0000 UTC,LastTransitionTime:2022-11-25 17:55:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.152.153,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58899ad1ba7a6711fcb2fb23af2e2912,SystemUUID:58899ad1-ba7a-6711-fcb2-fb23af2e2912,BootID:690b7c55-8447-49d5-8a09-10c87046c77c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 17:57:51.304: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 17:57:51.349: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 17:57:51.397: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container kube-scheduler ready: true, restart count 
1 Nov 25 17:57:51.398: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container etcd-container ready: true, restart count 0 Nov 25 17:57:51.398: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container etcd-container ready: true, restart count 1 Nov 25 17:57:51.398: INFO: metadata-proxy-v0.1-2q8s6 started at 2022-11-25 17:55:31 +0000 UTC (0+2 container statuses recorded) Nov 25 17:57:51.398: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 17:57:51.398: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 17:57:51.398: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container kube-apiserver ready: true, restart count 0 Nov 25 17:57:51.398: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 25 17:57:51.398: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 17:55:04 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 17:57:51.398: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 17:55:04 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 25 17:57:51.398: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 17:54:47 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.398: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 17:57:51.597: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 17:57:51.597: INFO: Logging node info for node bootstrap-e2e-minion-group-11zh Nov 25 17:57:51.639: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-11zh 51498931-fa93-403b-99dc-c4f0f6b81384 1654 0 2022-11-25 17:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-11zh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-11zh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-729":"bootstrap-e2e-minion-group-11zh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 17:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 17:55:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 17:57:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-25 17:57:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-11zh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 17:55:39 +0000 UTC,LastTransitionTime:2022-11-25 17:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:57:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.210.102,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-11zh.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa71c72a5648d3deaeffa3d5a75ed1ea,SystemUUID:aa71c72a-5648-d3de-aeff-a3d5a75ed1ea,BootID:4402c9e1-cf2e-4e88-9a9b-3152017f4dc0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-729^abef4dd6-6cea-11ed-a14b-eee2c9573b19,DevicePath:,},},Config:nil,},} Nov 25 17:57:51.639: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-11zh Nov 25 17:57:51.683: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-11zh Nov 25 17:57:51.734: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 17:57:31 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 17:57:51.734: INFO: pvc-volume-tester-7q9mv started at 2022-11-25 17:57:50 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container volume-tester ready: false, restart count 0 Nov 25 17:57:51.734: INFO: l7-default-backend-8549d69d99-c2mnz started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 17:57:51.734: INFO: external-provisioner-kpg9c started at 2022-11-25 17:57:30 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 25 17:57:51.734: INFO: affinity-lb-transition-6qqjw started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container affinity-lb-transition ready: true, restart count 1 Nov 25 17:57:51.734: INFO: csi-mockplugin-0 started at 2022-11-25 17:57:31 +0000 UTC (0+3 container statuses recorded) Nov 25 17:57:51.734: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 17:57:51.734: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 17:57:51.734: INFO: Container mock ready: true, restart count 0 Nov 25 17:57:51.734: INFO: var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0 started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container dapi-container ready: false, restart count 0 Nov 25 17:57:51.734: INFO: konnectivity-agent-r2744 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 25 17:57:51.734: INFO: volume-snapshot-controller-0 started at 2022-11-25 17:55:48 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container volume-snapshot-controller ready: true, restart count 2 Nov 25 17:57:51.734: INFO: hostexec-bootstrap-e2e-minion-group-11zh-g52kk started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 17:57:51.734: INFO: pod-configmaps-5ec8928e-ebc0-45ba-a6e5-ed8f240d753b started at 2022-11-25 17:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 17:57:51.734: INFO: kube-proxy-bootstrap-e2e-minion-group-11zh started at 2022-11-25 17:55:35 +0000 UTC (0+1 container statuses recorded) Nov 25 17:57:51.734: INFO: Container kube-proxy ready: true, restart count 1 Nov 25 17:57:51.734: INFO: metadata-proxy-v0.1-gzp2t started at 2022-11-25 17:55:36 +0000 UTC (0+2 container statuses recorded) Nov 25 17:57:51.734: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 17:57:51.734: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 17:57:51.916: INFO: Latency 
metrics for node bootstrap-e2e-minion-group-11zh Nov 25 17:57:51.916: INFO: Logging node info for node bootstrap-e2e-minion-group-4mzt Nov 25 17:57:51.959: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4mzt 22649193-0b27-417f-8621-b5ea24d332ed 1634 0 2022-11-25 17:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4mzt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4mzt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-provisioning-7709":"bootstrap-e2e-minion-group-4mzt","csi-hostpath-volumemode-4509":"bootstrap-e2e-minion-group-4mzt","csi-mock-csi-mock-volumes-5248":"bootstrap-e2e-minion-group-4mzt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 17:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 17:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 17:55:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 17:57:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 17:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-prow-canary/us-west1-b/bootstrap-e2e-minion-group-4mzt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 17:55:37 +0000 UTC,LastTransitionTime:2022-11-25 17:55:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 17:55:48 +0000 UTC,LastTransitionTime:2022-11-25 17:55:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 17:57:46 +0000 UTC,LastTransitionTime:2022-11-25 17:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.242.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4mzt.c.k8s-jkns-e2e-prow-canary.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e3433884817dce77b68706e93091a61,SystemUUID:1e343388-4817-dce7-7b68-706e93091a61,BootID:17688f51-d17a-4208-ac49-46ee5ba23c29,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volumemode-4509^a8891b85-6cea-11ed-abfe-e27e4eb3b800,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7709^a8b285e7-6cea-11ed-bbb6-5acdec970e3c,DevicePath:,},},Config:nil,},} Nov 25 17:57:51.959: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4mzt Nov 25 17:57:52.005: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4mzt Nov 25 17:58:22.048: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4mzt: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Nov 25 17:58:22.048: INFO: Logging node info for node bootstrap-e2e-minion-group-n7kw Nov 25 17:58:22.087: INFO: Error getting node info Get "https://35.233.152.153/api/v1/nodes/bootstrap-e2e-minion-group-n7kw": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:22.087: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 17:58:22.087: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-n7kw Nov 25 17:58:22.126: INFO: Unexpected error retrieving node events Get "https://35.233.152.153/api/v1/namespaces/kube-system/events?fieldSelector=source%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-n7kw%2CinvolvedObject.kind%3DNode%2CinvolvedObject.namespace%3D": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:22.126: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-n7kw Nov 25 17:58:22.166: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-n7kw: Get "https://35.233.152.153/api/v1/nodes/bootstrap-e2e-minion-group-n7kw:10250/proxy/pods": dial tcp 35.233.152.153:443: connect: connection refused [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-924" for this suite. 
11/25/22 17:58:22.166 Nov 25 17:58:22.205: FAIL: Couldn't delete ns: "loadbalancers-924": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-924": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-924", Err:(*net.OpError)(0xc002814af0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ed44b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00158d090?, 0xc0034f3fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00158d090?, 0x0?}, {0xae73300?, 0x5?, 0xc004983128?}) /usr/local/go/src/reflect/value.go:368 +0xbc
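The debug-dump and teardown steps above are ordinary Kubernetes API calls, so once the apiserver at 35.233.152.153:443 stops accepting connections every one of them fails with the same "connection refused" error. Purely as an illustration (this is not the e2e framework's own code), a client-go sketch of the same kind of requests — listing kubelet events for a node from kube-system and deleting the test namespace — looks roughly like this; the kubeconfig path, node name, and namespace are taken from the log above:

```go
// Illustrative sketch only: the cleanup calls above reduce to ordinary
// client-go requests like these, all of which fail with "connection refused"
// once the apiserver is unreachable.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Kubelet events for one node, as in the "Logging kubelet events" step:
	// GET /api/v1/namespaces/kube-system/events?fieldSelector=...
	node := "bootstrap-e2e-minion-group-n7kw"
	sel := fmt.Sprintf("source=kubelet,involvedObject.name=%s,involvedObject.kind=Node,involvedObject.namespace=", node)
	if _, err := cs.CoreV1().Events("kube-system").List(ctx, metav1.ListOptions{FieldSelector: sel}); err != nil {
		fmt.Println("list node events:", err) // e.g. dial tcp 35.233.152.153:443: connect: connection refused
	}

	// Namespace teardown, as in the final "Destroying namespace" step:
	// DELETE /api/v1/namespaces/loadbalancers-924
	if err := cs.CoreV1().Namespaces().Delete(ctx, "loadbalancers-924", metav1.DeleteOptions{}); err != nil {
		fmt.Println("delete namespace:", err)
	}
}
```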
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:4006 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc0034a6000}, 0x0, 0x1) test/e2e/network/service.go:4006 +0x4a5 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.9() test/e2e/network/loadbalancer.go:787 +0xf3 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:13:00.283: failed to list events in namespace "loadbalancers-9355": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:13:00.323: Couldn't delete ns: "loadbalancers-9355": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-9355", Err:(*net.OpError)(0xc00347d6d0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
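The affinity check driven by execAffinityTestForLBServiceWithOptionalTransition is visible in the log below: the test repeatedly pokes the LoadBalancer's external IP and records which backend pod answers, expecting every response to name the same pod while session affinity is in effect (here, affinity-lb-esipp-transition-s4hx2). A minimal sketch of that check — not the framework's implementation, and assuming the backend simply echoes the serving pod's name — is:

```go
// Minimal sketch of the affinity check seen in the log below: poke the
// LoadBalancer IP repeatedly and confirm every response names the same
// backend pod. The URL and expectation mirror the log; the code itself is
// illustrative, not the e2e framework's.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "http://34.127.117.201:80" // external IP from the log below
	client := &http.Client{Timeout: 2 * time.Second}

	hosts := map[string]int{}
	for i := 0; i < 15; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			fmt.Println("poke failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		hosts[strings.TrimSpace(string(body))]++ // treat the body as the backend identity
	}
	if len(hosts) == 1 {
		fmt.Println("session affinity held:", hosts)
	} else {
		fmt.Println("responses spread across backends:", hosts)
	}
}
```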
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:08:04.506 Nov 25 18:08:04.506: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:08:04.507 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:08:04.73 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:08:04.832 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:780 STEP: creating service in namespace loadbalancers-9355 11/25/22 18:08:05.002 STEP: creating service affinity-lb-esipp-transition in namespace loadbalancers-9355 11/25/22 18:08:05.002 STEP: creating replication controller affinity-lb-esipp-transition in namespace loadbalancers-9355 11/25/22 18:08:05.117 I1125 18:08:05.171869 8217 runners.go:193] Created replication controller with name: affinity-lb-esipp-transition, namespace: loadbalancers-9355, replica count: 3 I1125 18:08:08.273497 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:11.273820 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:14.274704 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:17.275802 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:20.276860 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:23.277162 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:26.277485 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:29.277822 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:32.278656 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:35.279805 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:38.280559 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:41.281633 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:44.282831 8217 
runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:47.283252 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:50.283552 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:53.284492 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:56.285053 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:08:59.285172 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:02.286205 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:05.286652 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:08.286764 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:11.287683 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:14.287795 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:17.288659 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:20.289641 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:23.290226 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:26.291138 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:29.291307 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:32.291669 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:35.292578 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:38.292921 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I1125 18:09:41.293645 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:44.294696 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:47.295707 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:50.295939 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:53.296931 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:56.297924 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:09:59.299083 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:02.300129 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:05.300417 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:08.301475 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:11.302630 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:14.303736 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:17.304759 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:20.304886 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:23.305134 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:26.305495 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:29.305659 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:32.306671 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:35.306881 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 
running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:38.307827 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:41.308125 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:44.308395 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:47.308771 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:50.309791 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:53.310547 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:56.311161 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 18:10:59.312022 8217 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: waiting for loadbalancer for service loadbalancers-9355/affinity-lb-esipp-transition 11/25/22 18:10:59.354 Nov 25 18:10:59.398: INFO: Waiting up to 15m0s for service "affinity-lb-esipp-transition" to have a LoadBalancer Nov 25 18:10:59.589: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:01.590: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:01.590: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:03.590: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:03.590: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:05.591: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:05.591: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:06.677: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:06.677: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:06.756: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:06.756: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:06.835: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:06.835: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:06.913: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:06.913: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:06.992: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:06.992: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.071: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.071: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.150: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.150: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.229: INFO: Poke("http://34.127.117.201:80"): 
success Nov 25 18:11:07.229: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.308: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.308: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.386: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.386: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.465: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.465: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:07.544: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:07.544: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:09.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:09.638: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:09.638: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:09.716: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:09.716: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:09.825: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:09.825: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:09.903: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:09.904: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:09.985: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:09.985: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.063: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.063: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.141: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.141: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.220: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.220: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.298: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.298: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.376: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.376: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.455: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.455: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.534: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.534: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.613: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.613: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.691: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.691: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:10.770: INFO: 
Poke("http://34.127.117.201:80"): success Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:10.770: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:11.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:11.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:11.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:11.701: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:11.701: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:11.780: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:11.780: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:11.859: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:11.859: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:11.938: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:11.938: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.016: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.016: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.095: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.095: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.174: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.174: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.253: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.253: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.331: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.331: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.410: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.410: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.489: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.489: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.567: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.568: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.646: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.646: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:12.725: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 
18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:12.725: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:13.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:13.626: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:13.626: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:13.705: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:13.705: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:13.783: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:13.783: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:13.862: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:13.862: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:13.941: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:13.941: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.020: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.020: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.098: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.098: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.177: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.177: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.255: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.255: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.334: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.334: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.413: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.413: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.492: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.492: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.570: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.570: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.659: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.659: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:14.738: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: 
affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:14.738: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:15.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:15.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:15.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:15.701: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:15.701: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:15.783: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:15.783: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:15.863: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:15.863: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:15.941: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:15.941: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.020: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.020: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.101: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.101: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.180: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.181: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.259: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.259: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.338: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.338: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.416: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.416: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.495: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.496: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.574: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.574: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.653: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.653: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:16.732: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: 
INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:16.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:17.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:17.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:17.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:17.701: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:17.701: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:17.780: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:17.780: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:17.858: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:17.858: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:17.937: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:17.937: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.015: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.015: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.094: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.094: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.172: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.172: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.250: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.250: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.329: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.329: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.407: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.407: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.486: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.486: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.564: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.565: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.643: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.643: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:18.722: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: 
affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:18.722: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:19.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:19.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:19.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:19.703: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:19.703: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:19.783: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:19.783: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:19.867: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:19.867: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:19.946: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:19.946: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.024: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.024: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.103: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.103: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.181: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.181: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.260: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.260: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.338: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.338: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.417: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.417: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.496: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.496: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.574: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.574: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.653: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.653: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:20.732: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: 
INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:20.732: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:21.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:21.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:21.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:21.702: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:21.702: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:21.780: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:21.780: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:21.859: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:21.859: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:21.937: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:21.937: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.015: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.015: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.094: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.094: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.173: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.173: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.251: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.251: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.330: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.330: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.408: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.408: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.487: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.487: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.566: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.566: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.644: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.644: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:22.723: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:22.723: INFO: Received response from host: 
affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:23.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:23.623: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:23.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:23.702: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:23.702: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:23.781: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:23.781: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:23.860: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:23.860: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:23.938: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:23.938: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.017: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.017: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.095: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.095: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.174: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.174: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.253: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.253: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.332: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.332: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.411: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.411: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.490: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.490: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.570: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.570: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.649: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.649: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:24.728: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:24.728: INFO: Received response from host: affinity-lb-esipp-transition-s4hx2 Nov 25 18:11:25.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:27.545: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers) Nov 25 18:11:27.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:29.546: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:29.546: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:31.547: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:31.547: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:33.548: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:33.548: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:35.549: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:35.549: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:37.549: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:37.549: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:39.550: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:39.550: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:41.551: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:41.551: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:43.552: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:43.552: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:45.553: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:45.553: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:47.554: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:47.554: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:49.554: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:49.554: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:51.554: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:51.554: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:53.555: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:53.555: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:55.556: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:57.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:11:59.545: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:11:59.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:01.545: INFO: 
Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:01.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:03.546: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:03.546: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:05.547: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:05.547: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:07.548: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:07.548: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:09.548: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:09.548: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:11.549: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:11.549: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:11.590: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:11.590: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:11.629: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:11.629: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:13.629: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:13.629: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:13.670: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:13.670: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:15.671: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:15.671: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:15.710: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:15.710: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:15.749: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:15.749: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:15.789: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:17.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:19.544: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:19.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:21.545: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:21.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:21.586: INFO: 
Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:21.586: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:23.587: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:23.587: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:23.626: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:23.626: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:23.667: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:23.667: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:25.668: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:25.668: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:25.707: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:25.707: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:25.746: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:25.747: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:27.747: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:27.747: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:27.786: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:27.787: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:27.826: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:27.826: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:29.827: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:29.827: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:29.867: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:29.867: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:31.867: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:33.544: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:33.584: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:33.584: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:33.623: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:33.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:35.623: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:35.623: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:37.624: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:37.624: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:37.665: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:37.665: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:39.665: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:39.665: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:39.705: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:39.705: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.706: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:41.706: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.746: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:41.746: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.786: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:41.786: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.825: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:41.825: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.865: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:41.865: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:41.904: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:41.904: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:43.905: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:43.905: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:45.905: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:47.545: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:47.585: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:47.585: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:47.624: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:47.624: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:47.664: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:47.664: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:47.704: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:47.704: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:49.705: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 
25 18:12:49.705: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:51.706: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:51.706: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:51.746: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:51.746: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:53.747: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:53.747: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:55.747: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:55.747: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:55.787: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:55.787: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:57.787: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:57.787: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:57.826: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:57.826: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:59.827: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 18:12:59.827: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:59.907: INFO: Poke("http://34.127.117.201:80"): success Nov 25 18:12:59.907: INFO: Poking "http://34.127.117.201:80" Nov 25 18:12:59.946: INFO: Poke("http://34.127.117.201:80"): Get "http://34.127.117.201:80": dial tcp 34.127.117.201:80: connect: connection refused Nov 25 18:12:59.946: INFO: Received response from host: affinity-lb-esipp-transition-9mwqw Nov 25 18:12:59.986: INFO: Unexpected error: <*errors.errorString | 0xc0015b3450>: { s: "failed to get Service \"affinity-lb-esipp-transition\": Get \"https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/services/affinity-lb-esipp-transition\": dial tcp 35.233.152.153:443: connect: connection refused", } Nov 25 18:12:59.986: FAIL: failed to get Service "affinity-lb-esipp-transition": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/services/affinity-lb-esipp-transition": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc0034a6000}, 0x0, 0x1) test/e2e/network/service.go:4006 +0x4a5 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) 
test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.9() test/e2e/network/loadbalancer.go:787 +0xf3 Nov 25 18:13:00.025: INFO: [pod,node] pairs: []; err: Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/pods": dial tcp 35.233.152.153:443: connect: connection refused STEP: deleting ReplicationController affinity-lb-esipp-transition in namespace loadbalancers-9355, will wait for the garbage collector to delete the pods 11/25/22 18:13:00.025 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 18:13:00.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 18:13:00.105: INFO: Output of kubectl describe svc: Nov 25 18:13:00.105: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-9355 describe svc --namespace=loadbalancers-9355' Nov 25 18:13:00.243: INFO: rc: 1 Nov 25 18:13:00.243: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:00.243 STEP: Collecting events from namespace "loadbalancers-9355". 11/25/22 18:13:00.243 Nov 25 18:13:00.283: INFO: Unexpected error: failed to list events in namespace "loadbalancers-9355": <*url.Error | 0xc003878ea0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/events", Err: <*net.OpError | 0xc004450d20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002a7e780>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003605bc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:13:00.283: FAIL: failed to list events in namespace "loadbalancers-9355": Get "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0019f25c0, {0xc00056ff20, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0034a6000}, {0xc00056ff20, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0019f2650?, {0xc00056ff20?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011e84b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00172e6e0?, 0xc0028acf50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00172e6e0?, 0x7fadfa0?}, {0xae73300?, 0xc0028acf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-9355" for this suite. 
11/25/22 18:13:00.284 Nov 25 18:13:00.323: FAIL: Couldn't delete ns: "loadbalancers-9355": Delete "https://35.233.152.153/api/v1/namespaces/loadbalancers-9355": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/loadbalancers-9355", Err:(*net.OpError)(0xc00347d6d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011e84b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00172e600?, 0xc0042cbfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00172e600?, 0x0?}, {0xae73300?, 0x4?, 0xc000945de8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
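The block above is the session-affinity check itself: the test repeatedly pokes the load balancer IP and verifies that every response in a window comes back from the same backend pod (here affinity-lb-esipp-transition-s4hx2), until the apiserver at 35.233.152.153 stops answering. Below is a minimal sketch of that poke-and-compare pattern, assuming a 2-second per-request timeout and a 15-response window; the helper names and limits are illustrative, not the e2e framework's own code.

```go
// Sketch only: repeatedly GET the LB address, record which backend answered,
// and require that every response in the window came from the same host.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// poke issues one GET with a short deadline and returns the response body,
// which the test image uses to echo its own pod name.
func poke(url string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err // surfaces as "context deadline exceeded" or "connection refused"
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

// checkAffinity collects `window` successful responses and reports whether
// they all named the same backend pod.
func checkAffinity(url string, window int) bool {
	hosts := make([]string, 0, window)
	for attempts := 0; len(hosts) < window && attempts < 3*window; attempts++ {
		host, err := poke(url)
		if err != nil {
			fmt.Printf("Poke(%q): %v\n", url, err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Printf("Received response from host: %s\n", host)
		hosts = append(hosts, host)
	}
	if len(hosts) < window {
		return false // endpoint became unreachable before the window filled
	}
	for _, h := range hosts {
		if h != hosts[0] {
			return false // more than one backend answered: affinity is broken
		}
	}
	return true
}

func main() {
	fmt.Println(checkAffinity("http://34.127.117.201:80", 15))
}
```

Once the backend or the load balancer stops answering, the same loop produces exactly the error strings seen above: "context deadline exceeded" when the dial hangs past the per-request timeout, and "connection refused" when the port actively rejects the connection.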
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shandle\sload\sbalancer\scleanup\sfinalizer\sfor\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011584b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113from junit_01.xml
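The "[PANICKED] ... invalid memory address or nil pointer dereference" follow-up (it recurs for the two session-affinity entries below) is a symptom of cleanup running after a failed setup: BeforeEach never got a namespace or client past the apiserver, so the AfterEach at loadbalancer.go:73 dereferences objects that were never initialized. A framework-free sketch of the hazard and the guard follows; the struct and field names are made up for illustration and are not the actual loadbalancer.go code.

```go
// Sketch: cleanup that tolerates a setup which never completed.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

type testState struct {
	cs *kubernetes.Clientset // stays nil when BeforeEach fails before creating a client
}

func (t *testState) afterEach() {
	if t.cs == nil {
		// Without this guard, calling into t.cs panics with a nil pointer
		// dereference, which is what the junit summary above records.
		fmt.Println("setup never completed; skipping cleanup")
		return
	}
	// ... normal cleanup: describe services, dump namespace info, etc. ...
}

func main() {
	(&testState{}).afterEach()
}
```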
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:13:32.857 Nov 25 18:13:32.857: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:13:32.859 Nov 25 18:13:32.898: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:34.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:36.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:38.939: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:40.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:42.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:44.939: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:46.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:48.939: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:50.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:52.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:54.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:56.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:13:58.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:00.938: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:02.939: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:02.978: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:14:02.978: INFO: Unexpected error: <*errors.errorString | 0xc0001fda10>: { s: "timed out waiting for the condition", } Nov 25 18:14:02.978: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011584b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 18:14:02.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:14:03.018 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
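The wall of "Unexpected error while creating namespace" lines is the framework's BeforeEach retrying the Namespace POST roughly every 2 seconds until it gives up with "timed out waiting for the condition". A hedged sketch of that retry loop using client-go is below; the 30-second budget, the generateName prefix, and the function name are assumptions, not the framework's exact values.

```go
// Sketch: retry namespace creation on a fixed cadence until success or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var got *corev1.Namespace
	// Poll every 2s, give up after 30s, mirroring the cadence visible in the log.
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // retry; a down apiserver shows up as "connection refused"
		}
		got = ns
		return true, nil
	})
	return got, err // "timed out waiting for the condition" once the budget is exhausted
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if _, err := createTestNamespace(cs, "loadbalancers"); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```

Because the apiserver at 35.233.152.153 refuses every connection, each attempt fails identically and the poll can only end in the timeout error that the test then reports as FAIL.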
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011ca4b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:04:28.949 Nov 25 18:04:28.949: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:04:28.951 Nov 25 18:04:28.990: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:31.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:33.030: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:35.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:37.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:39.030: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:41.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:43.030: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:45.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:47.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:49.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:51.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:53.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:55.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:57.030: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:59.031: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:59.070: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:59.070: INFO: Unexpected error: <*errors.errorString | 0xc000289c50>: { s: "timed out waiting for the condition", } Nov 25 18:04:59.070: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011ca4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 18:04:59.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:04:59.111 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b344b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:54.008 Nov 25 17:57:54.008: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 17:57:54.009 Nov 25 17:57:54.049: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:57:56.088: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:57:58.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:00.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:02.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:04.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:06.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:08.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:10.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:12.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:14.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:16.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:18.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:20.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:22.088: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:24.089: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:24.128: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:58:24.129: INFO: Unexpected error: <*errors.errorString | 0xc0001fda30>: { s: "timed out waiting for the condition", } Nov 25 17:58:24.129: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b344b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 17:58:24.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 17:58:24.17 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\shttp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0015022a0, {0x75c6f7c, 0x9}, 0xc00337be90) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0015022a0, 0x7fc4e89c7bc0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0015022a0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0013045a0, {0xc0042c2f20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.13() test/e2e/network/networking.go:364 +0x51 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:17:52.373: failed to list events in namespace "nettest-81": Get "https://35.233.152.153/api/v1/namespaces/nettest-81/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:17:52.413: Couldn't delete ns: "nettest-81": Delete "https://35.233.152.153/api/v1/namespaces/nettest-81": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/nettest-81", Err:(*net.OpError)(0xc000a63a40)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:15:01.246 Nov 25 18:15:01.246: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/25/22 18:15:01.248 Nov 25 18:15:01.287: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:16:15.423 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:16:15.505 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should update nodePort: http [Slow] test/e2e/network/networking.go:363 STEP: Performing setup for networking test in namespace nettest-81 11/25/22 18:16:17.922 STEP: creating a selector 11/25/22 18:16:17.923 STEP: Creating the service pods in kubernetes 11/25/22 18:16:17.923 Nov 25 18:16:17.923: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 18:16:18.190: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-81" to be "running and ready" Nov 25 18:16:18.253: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 62.349653ms Nov 25 18:16:18.253: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:20.305: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115035649s Nov 25 18:16:20.305: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:22.349: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158821787s Nov 25 18:16:22.349: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:24.334: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143269046s Nov 25 18:16:24.334: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:26.321: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130680059s Nov 25 18:16:26.321: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:28.325: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135003147s Nov 25 18:16:28.325: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:30.314: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.123792065s Nov 25 18:16:30.314: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:32.308: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117544583s Nov 25 18:16:32.308: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:34.354: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.164179496s Nov 25 18:16:34.354: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:36.313: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.122344123s Nov 25 18:16:36.313: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:38.304: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.113350151s Nov 25 18:16:38.304: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:40.309: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11824789s Nov 25 18:16:40.309: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:42.305: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.114851965s Nov 25 18:16:42.305: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:44.312: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.121892249s Nov 25 18:16:44.312: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:46.320: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.130016971s Nov 25 18:16:46.320: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:48.310: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.119618934s Nov 25 18:16:48.310: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:50.305: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115099218s Nov 25 18:16:50.305: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:52.319: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.128835549s Nov 25 18:16:52.319: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:54.304: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.114148403s Nov 25 18:16:54.304: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:56.333: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.142602222s Nov 25 18:16:56.333: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:16:58.306: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 40.116115792s Nov 25 18:16:58.306: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:00.337: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.146702488s Nov 25 18:17:00.337: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:02.302: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 44.112167071s Nov 25 18:17:02.302: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:04.344: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.153915598s Nov 25 18:17:04.344: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:06.325: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.134511897s Nov 25 18:17:06.325: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:08.332: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 50.142104265s Nov 25 18:17:08.332: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:10.339: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 52.148807224s Nov 25 18:17:10.339: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:12.299: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 54.108283182s Nov 25 18:17:12.299: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:14.350: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 56.159650955s Nov 25 18:17:14.350: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:16.314: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 58.123549294s Nov 25 18:17:16.314: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:18.327: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.136924112s Nov 25 18:17:18.327: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:20.338: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.147396977s Nov 25 18:17:20.338: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:22.308: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.117255494s Nov 25 18:17:22.308: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:24.306: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.115784138s Nov 25 18:17:24.306: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:26.300: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.10941521s Nov 25 18:17:26.300: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:28.332: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.14208601s Nov 25 18:17:28.332: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:30.303: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.112247275s Nov 25 18:17:30.303: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:32.300: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.110087425s Nov 25 18:17:32.300: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:34.298: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.107371179s Nov 25 18:17:34.298: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:36.296: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m18.106185132s Nov 25 18:17:36.296: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:38.311: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.121084102s Nov 25 18:17:38.311: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:40.323: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.132412156s Nov 25 18:17:40.323: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:42.320: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.129309017s Nov 25 18:17:42.320: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:44.317: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.126588638s Nov 25 18:17:44.317: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:46.295: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.104796869s Nov 25 18:17:46.295: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:48.376: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.18580599s Nov 25 18:17:48.376: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:50.318: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.127545546s Nov 25 18:17:50.318: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:17:52.292: INFO: Encountered non-retryable error while getting pod nettest-81/netserver-0: Get "https://35.233.152.153/api/v1/namespaces/nettest-81/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:17:52.293: INFO: Unexpected error: <*fmt.wrapError | 0xc0011f4b20>: { msg: "error while waiting for pod nettest-81/netserver-0 to be running and ready: Get \"https://35.233.152.153/api/v1/namespaces/nettest-81/pods/netserver-0\": dial tcp 35.233.152.153:443: connect: connection refused", err: <*url.Error | 0xc001a1ad80>{ Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/nettest-81/pods/netserver-0", Err: <*net.OpError | 0xc002d79180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004209ad0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011f4aa0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 18:17:52.293: FAIL: error while waiting for pod nettest-81/netserver-0 to be running and ready: Get "https://35.233.152.153/api/v1/namespaces/nettest-81/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0015022a0, {0x75c6f7c, 0x9}, 0xc00337be90) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0015022a0, 0x7fc4e89c7bc0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0015022a0, 0x3c?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0013045a0, {0xc0042c2f20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.13() test/e2e/network/networking.go:364 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 25 18:17:52.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:17:52.333 STEP: Collecting events from namespace "nettest-81". 11/25/22 18:17:52.333 Nov 25 18:17:52.372: INFO: Unexpected error: failed to list events in namespace "nettest-81": <*url.Error | 0xc001a1b290>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/nettest-81/events", Err: <*net.OpError | 0xc002d79360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0029ba0f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011f4ea0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:17:52.373: FAIL: failed to list events in namespace "nettest-81": Get "https://35.233.152.153/api/v1/namespaces/nettest-81/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0011685c0, {0xc0038d9490, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc004c929c0}, {0xc0038d9490, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001168650?, {0xc0038d9490?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0013045a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00037be10?, 0xc0000cefb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002d40d88?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00037be10?, 0x29449fc?}, {0xae73300?, 0xc0000cef80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "nettest-81" for this suite. 11/25/22 18:17:52.373 Nov 25 18:17:52.413: FAIL: Couldn't delete ns: "nettest-81": Delete "https://35.233.152.153/api/v1/namespaces/nettest-81": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/nettest-81", Err:(*net.OpError)(0xc000a63a40)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013045a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00037bcf0?, 0xc001b09fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00037bcf0?, 0x0?}, {0xae73300?, 0x5?, 0xc003603788?}) /usr/local/go/src/reflect/value.go:368 +0xbc
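The wait loop above is the e2e framework polling netserver-0 for Running/Ready until the apiserver at 35.233.152.153:443 starts refusing connections (syscall.Errno 0x6f is ECONNREFUSED on Linux). As a rough illustration only, and not the framework's own wait helper, here is a minimal client-go sketch of that kind of poll that treats a refused connection as non-retryable; the kubeconfig path and the namespace/pod names are copied from the log purely as placeholders.

```go
// Minimal sketch: poll a pod until Running and Ready, treating a refused
// connection to the apiserver as non-retryable (as the framework log does).
package main

import (
	"context"
	"errors"
	"fmt"
	"syscall"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "nettest-81", "netserver-0" // placeholders taken from the log above
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.Is(err, syscall.ECONNREFUSED) {
			// The apiserver is down (Errno 0x6f == ECONNREFUSED); stop retrying.
			return false, fmt.Errorf("non-retryable: %w", err)
		}
		if err != nil {
			return false, nil // transient error, keep polling
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err)
}
```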
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\sudp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004a30380, {0x75c6f7c, 0x9}, 0xc004b496e0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004a30380, 0x7fa8985c0a38?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004a30380, 0x3e?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012005a0, {0xc004842f20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.15() test/e2e/network/networking.go:395 +0x51 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:13:00.059: failed to list events in namespace "nettest-4145": Get "https://35.233.152.153/api/v1/namespaces/nettest-4145/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:13:00.099: Couldn't delete ns: "nettest-4145": Delete "https://35.233.152.153/api/v1/namespaces/nettest-4145": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/nettest-4145", Err:(*net.OpError)(0xc004fba4b0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:12:20.999 Nov 25 18:12:20.999: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/25/22 18:12:21.002 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:12:21.18 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:12:21.268 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should update nodePort: udp [Slow] test/e2e/network/networking.go:394 STEP: Performing setup for networking test in namespace nettest-4145 11/25/22 18:12:21.365 STEP: creating a selector 11/25/22 18:12:21.365 STEP: Creating the service pods in kubernetes 11/25/22 18:12:21.366 Nov 25 18:12:21.366: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 18:12:21.862: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-4145" to be "running and ready" Nov 25 18:12:21.939: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 76.432258ms Nov 25 18:12:21.939: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:12:24.012: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.149710364s Nov 25 18:12:24.012: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:25.988: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.12546965s Nov 25 18:12:25.988: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:27.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.132665496s Nov 25 18:12:27.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:30.089: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.226788067s Nov 25 18:12:30.089: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:32.014: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.151970253s Nov 25 18:12:32.014: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:34.007: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.144700266s Nov 25 18:12:34.007: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:35.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.132994331s Nov 25 18:12:35.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:38.037: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.174517245s Nov 25 18:12:38.037: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:40.005: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.142744839s Nov 25 18:12:40.005: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:42.014: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.151740799s Nov 25 18:12:42.014: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:44.024: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.162004835s Nov 25 18:12:44.024: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:45.997: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.134886402s Nov 25 18:12:45.997: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:48.011: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.148456188s Nov 25 18:12:48.011: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:50.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.211247456s Nov 25 18:12:50.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:51.993: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.130670111s Nov 25 18:12:51.993: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:54.120: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.257718999s Nov 25 18:12:54.120: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:55.988: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.125210279s Nov 25 18:12:55.988: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:57.998: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.135418707s Nov 25 18:12:57.998: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:12:59.979: INFO: Encountered non-retryable error while getting pod nettest-4145/netserver-0: Get "https://35.233.152.153/api/v1/namespaces/nettest-4145/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:12:59.979: INFO: Unexpected error: <*fmt.wrapError | 0xc004ea75e0>: { msg: "error while waiting for pod nettest-4145/netserver-0 to be running and ready: Get \"https://35.233.152.153/api/v1/namespaces/nettest-4145/pods/netserver-0\": dial tcp 35.233.152.153:443: connect: connection refused", err: <*url.Error | 0xc004eff5f0>{ Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/nettest-4145/pods/netserver-0", Err: <*net.OpError | 0xc004fba050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004dd4cf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004ea75a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 18:12:59.979: FAIL: error while waiting for pod nettest-4145/netserver-0 to be running and ready: Get "https://35.233.152.153/api/v1/namespaces/nettest-4145/pods/netserver-0": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004a30380, {0x75c6f7c, 0x9}, 0xc004b496e0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004a30380, 0x7fa8985c0a38?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004a30380, 0x3e?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012005a0, {0xc004842f20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.15() test/e2e/network/networking.go:395 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 25 18:12:59.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:13:00.019 STEP: Collecting events from namespace "nettest-4145". 11/25/22 18:13:00.019 Nov 25 18:13:00.059: INFO: Unexpected error: failed to list events in namespace "nettest-4145": <*url.Error | 0xc004e797a0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/nettest-4145/events", Err: <*net.OpError | 0xc004f28280>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004effef0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004f2c020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:13:00.059: FAIL: failed to list events in namespace "nettest-4145": Get "https://35.233.152.153/api/v1/namespaces/nettest-4145/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0046525c0, {0xc004c7b250, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc003e0c9c0}, {0xc004c7b250, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004652650?, {0xc004c7b250?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012005a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0046fd600?, 0xc00483ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004284be8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0046fd600?, 0x29449fc?}, {0xae73300?, 0xc00483ff80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "nettest-4145" for this suite. 11/25/22 18:13:00.059 Nov 25 18:13:00.099: FAIL: Couldn't delete ns: "nettest-4145": Delete "https://35.233.152.153/api/v1/namespaces/nettest-4145": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/nettest-4145", Err:(*net.OpError)(0xc004fba4b0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012005a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0046fd580?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0046fd580?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
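The nodePort-udp spec above fails the same way as the previous one: setup never completes because the apiserver goes away, not because the networking under test misbehaves. As a hedged aside (not part of the framework), a small sketch of the kind of pre-flight reachability check that separates a control-plane outage from a genuine spec failure; it assumes a clientset built the same way as in the previous sketch.

```go
// Sketch: confirm the control plane answers a cheap discovery call before
// attributing a failure to the spec itself.
package e2esketch

import (
	"errors"
	"fmt"
	"syscall"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForAPIServer polls ServerVersion until the apiserver answers or a
// minute passes. A persistent ECONNREFUSED, as in the log above, points at
// the control plane rather than at the nettest spec.
func WaitForAPIServer(cs kubernetes.Interface) error {
	return wait.PollImmediate(5*time.Second, time.Minute, func() (bool, error) {
		_, err := cs.Discovery().ServerVersion()
		if err == nil {
			return true, nil
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("apiserver refused connection, retrying:", err)
		}
		return false, nil
	})
}
```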
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sGCE\s\[Slow\]\sshould\sbe\sable\sto\screate\sand\stear\sdown\sa\sstandard\-tier\sload\sbalancer\s\[Slow\]$'
test/e2e/network/network_tiers.go:76 k8s.io/kubernetes/test/e2e/network.glob..func21.3() test/e2e/network/network_tiers.go:76 +0x106 There were additional failures detected after the initial failure: [FAILED] Nov 25 17:57:55.456: failed to list events in namespace "services-5618": Get "https://35.233.152.153/api/v1/namespaces/services-5618/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 17:57:55.496: Couldn't delete ns: "services-5618": Delete "https://35.233.152.153/api/v1/namespaces/services-5618": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/services-5618", Err:(*net.OpError)(0xc00181fbd0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] Services GCE [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:28.58 Nov 25 17:57:28.580: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services 11/25/22 17:57:28.582 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 17:57:28.72 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 17:57:28.807 [BeforeEach] [sig-network] Services GCE [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services GCE [Slow] test/e2e/network/network_tiers.go:49 [It] should be able to create and tear down a standard-tier load balancer [Slow] test/e2e/network/network_tiers.go:66 STEP: creating a pod to be part of the service net-tiers-svc 11/25/22 17:57:28.996 Nov 25 17:57:29.053: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 17:57:29.127: INFO: Found 0/1 pods - will retry Nov 25 17:57:31.168: INFO: Found all 1 pods Nov 25 17:57:31.168: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [net-tiers-svc-7xgzq] Nov 25 17:57:31.168: INFO: Waiting up to 2m0s for pod "net-tiers-svc-7xgzq" in namespace "services-5618" to be "running and ready" Nov 25 17:57:31.210: INFO: Pod "net-tiers-svc-7xgzq": Phase="Pending", Reason="", readiness=false. Elapsed: 41.248434ms Nov 25 17:57:31.210: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:33.253: INFO: Pod "net-tiers-svc-7xgzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084600302s Nov 25 17:57:33.253: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:35.251: INFO: Pod "net-tiers-svc-7xgzq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082879452s Nov 25 17:57:35.251: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:37.251: INFO: Pod "net-tiers-svc-7xgzq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082987923s Nov 25 17:57:37.251: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' to be 'Running' but was 'Pending' Nov 25 17:57:39.253: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 8.084263391s Nov 25 17:57:39.253: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:41.252: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.083240782s Nov 25 17:57:41.252: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:43.257: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 12.088065313s Nov 25 17:57:43.257: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:45.251: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 14.082877047s Nov 25 17:57:45.251: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:47.254: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 16.085107496s Nov 25 17:57:47.254: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:49.252: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.083914532s Nov 25 17:57:49.252: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:51.253: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 20.084851444s Nov 25 17:57:51.253: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:53.254: INFO: Pod "net-tiers-svc-7xgzq": Phase="Running", Reason="", readiness=false. Elapsed: 22.085823106s Nov 25 17:57:53.254: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-7xgzq' on 'bootstrap-e2e-minion-group-4mzt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 17:57:29 +0000 UTC }] Nov 25 17:57:55.250: INFO: Encountered non-retryable error while getting pod services-5618/net-tiers-svc-7xgzq: Get "https://35.233.152.153/api/v1/namespaces/services-5618/pods/net-tiers-svc-7xgzq": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 17:57:55.250: INFO: Pod net-tiers-svc-7xgzq failed to be running and ready. Nov 25 17:57:55.250: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [net-tiers-svc-7xgzq] Nov 25 17:57:55.250: INFO: Unexpected error: <*errors.errorString | 0xc000fe04d0>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } Nov 25 17:57:55.250: FAIL: failed waiting for pods to be running: timeout waiting for 1 pods to be ready Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func21.3() test/e2e/network/network_tiers.go:76 +0x106 [AfterEach] [sig-network] Services GCE [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 17:57:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] Services GCE [Slow] test/e2e/network/network_tiers.go:55 Nov 25 17:57:55.290: INFO: Output of kubectl describe svc: Nov 25 17:57:55.290: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.152.153 --kubeconfig=/workspace/.kube/config --namespace=services-5618 describe svc --namespace=services-5618' Nov 25 17:57:55.416: INFO: rc: 1 Nov 25 17:57:55.416: INFO: [DeferCleanup (Each)] [sig-network] Services GCE [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services GCE [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 17:57:55.416 STEP: Collecting events from namespace "services-5618". 11/25/22 17:57:55.416 Nov 25 17:57:55.456: INFO: Unexpected error: failed to list events in namespace "services-5618": <*url.Error | 0xc001c65650>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/services-5618/events", Err: <*net.OpError | 0xc001a0a910>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0010b5620>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000fe45e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 17:57:55.456: FAIL: failed to list events in namespace "services-5618": Get "https://35.233.152.153/api/v1/namespaces/services-5618/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc002f345c0, {0xc00413a070, 0xd}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001c5a680}, {0xc00413a070, 0xd}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc002f34650?, {0xc00413a070?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000ffa0f0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000eacae0?, 0xc001e0bf50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000eacae0?, 0x7fadfa0?}, {0xae73300?, 0xc001e0bf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Services GCE [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "services-5618" for this suite. 
11/25/22 17:57:55.457 Nov 25 17:57:55.496: FAIL: Couldn't delete ns: "services-5618": Delete "https://35.233.152.153/api/v1/namespaces/services-5618": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/services-5618", Err:(*net.OpError)(0xc00181fbd0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ffa0f0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000eac9a0?, 0xc001e0cfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000eac9a0?, 0x0?}, {0xae73300?, 0x5?, 0xc00021d110?}) /usr/local/go/src/reflect/value.go:368 +0xbc
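The GCE standard-tier spec above never reaches the load-balancer step: the backend pod net-tiers-svc-7xgzq never reports Ready before the apiserver becomes unreachable. For context, a hedged sketch of what "create a standard-tier load balancer" amounts to at the API level: a Service of type LoadBalancer carrying GCE's network-tier annotation, followed by a wait for an ingress IP. The annotation key and value (cloud.google.com/network-tier: Standard) come from the GCE cloud-provider documentation rather than from this log, and the selector and ports are placeholders.

```go
// Sketch: request GCE's Standard network tier on a LoadBalancer Service and
// wait for the cloud provider to publish an ingress IP.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// CreateStandardTierLB creates the Service and polls until status reports a
// load-balancer ingress. Selector and ports are illustrative placeholders.
func CreateStandardTierLB(cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "net-tiers-svc",
			Annotations: map[string]string{"cloud.google.com/network-tier": "Standard"},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "netexec"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Provisioning a GCE forwarding rule typically takes a few minutes.
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		s, err := cs.CoreV1().Services(ns).Get(context.TODO(), "net-tiers-svc", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient; keep polling
		}
		return len(s.Status.LoadBalancer.Ingress) > 0, nil
	})
}
```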
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\scap\sback\-off\sat\sMaxContainerBackOff\s\[Slow\]\[NodeConformance\]$'
test/e2e/common/node/pods.go:129 k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc000e44f60, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:129 +0x225 k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 +0x37f There were additional failures detected after the initial failure: [FAILED] Nov 25 18:17:51.941: failed to list events in namespace "pods-6322": Get "https://35.233.152.153/api/v1/namespaces/pods-6322/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:17:51.981: Couldn't delete ns: "pods-6322": Delete "https://35.233.152.153/api/v1/namespaces/pods-6322": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/pods-6322", Err:(*net.OpError)(0xc00169ba90)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-node] Pods set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:07:27.86 Nov 25 18:07:27.860: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename pods 11/25/22 18:07:27.862 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:07:28.294 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:07:28.413 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:717 Nov 25 18:07:28.773: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-6322" to be "running and ready" Nov 25 18:07:28.910: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 136.971444ms Nov 25 18:07:28.910: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:30.966: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193264445s Nov 25 18:07:30.966: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:32.984: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211397129s Nov 25 18:07:32.984: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:34.975: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201897451s Nov 25 18:07:34.975: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:36.969: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195922277s Nov 25 18:07:36.969: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:39.004: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 10.230907041s Nov 25 18:07:39.004: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:07:40.974: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.200795306s Nov 25 18:07:40.974: INFO: The phase of Pod back-off-cap is Running (Ready = true) Nov 25 18:07:40.974: INFO: Pod "back-off-cap" satisfied condition "running and ready" ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m0.714s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m0.001s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 6 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m20.716s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m20.003s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 6 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m40.723s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m40.009s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 6 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 6m0.725s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 6m0.011s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 7 minutes] time.Sleep(0x8bb2c97000) 
/usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 6m20.727s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 6m20.014s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 7 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 6m40.729s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 6m40.016s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 7 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 7m0.731s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 7m0.018s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 8 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ 
------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 7m20.733s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 7m20.02s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 8 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 7m40.735s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 7m40.021s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 8 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 8m0.736s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 8m0.023s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 9 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 8m20.738s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 8m20.025s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 9 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 8m40.74s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 8m40.027s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 9 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 9m0.743s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 9m0.029s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 10 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 9m20.745s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 9m20.032s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 10 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap 
back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 9m40.747s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 9m40.033s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 10 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 10m0.749s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 10m0.036s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 2459 [sleep, 11 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ STEP: getting restart delay when capped 11/25/22 18:17:41.023 ------------------------------ Progress Report for Ginkgo Process #19 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 10m20.754s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 10m20.041s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 7.591s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 2459 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc000e44f60, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000a85380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 18:17:51.861: INFO: Unexpected error: getting pod back-off-cap: <*url.Error | 0xc00502f140>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/pods-6322/pods/back-off-cap", Err: <*net.OpError | 0xc004963360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004cb70b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0004adb60>{ 
Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:17:51.861: FAIL: getting pod back-off-cap: Get "https://35.233.152.153/api/v1/namespaces/pods-6322/pods/back-off-cap": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc000e44f60, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:129 +0x225 k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 +0x37f [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 Nov 25 18:17:51.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:17:51.901 STEP: Collecting events from namespace "pods-6322". 11/25/22 18:17:51.901 Nov 25 18:17:51.941: INFO: Unexpected error: failed to list events in namespace "pods-6322": <*url.Error | 0xc00502f5c0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/pods-6322/events", Err: <*net.OpError | 0xc0049637c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004cb7620>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0004adec0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:17:51.941: FAIL: failed to list events in namespace "pods-6322": Get "https://35.233.152.153/api/v1/namespaces/pods-6322/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0011645c0, {0xc004869d50, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc004bfba00}, {0xc004869d50, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001164650?, {0xc004869d50?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003efe00) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001692250?, 0xc003a6bfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0005e3748?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001692250?, 0x29449fc?}, {0xae73300?, 0xc003a6bf80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 STEP: Destroying namespace "pods-6322" for this suite. 
11/25/22 18:17:51.942 Nov 25 18:17:51.981: FAIL: Couldn't delete ns: "pods-6322": Delete "https://35.233.152.153/api/v1/namespaces/pods-6322": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/pods-6322", Err:(*net.OpError)(0xc00169ba90)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0003efe00) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001692160?, 0xc000a9ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001692160?, 0x0?}, {0xae73300?, 0x5?, 0xc001177020?}) /usr/local/go/src/reflect/value.go:368 +0xbc
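The back-off spec above sleeps for most of its runtime before it can sample the capped restart delay, and the apiserver disappears in exactly that window. What it would have measured is kubelet's crash-loop back-off, which doubles from an initial delay and is capped at MaxContainerBackOff. A small arithmetic sketch under the commonly cited defaults (10s initial delay, factor 2, 300s cap; assumptions, not values read from this log) shows why the cap is reached after only a handful of restarts yet the spec still needs a long runtime:

```go
// Sketch: kubelet-style crash-loop back-off, doubling from an assumed 10s
// initial delay and capping at an assumed 300s MaxContainerBackOff.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay        = 10 * time.Second
		maxContainerBackOff = 300 * time.Second
	)
	delay := initialDelay
	for restart := 1; restart <= 10; restart++ {
		fmt.Printf("restart %2d: wait %v before next start\n", restart, delay)
		delay *= 2
		if delay > maxContainerBackOff {
			delay = maxContainerBackOff // the cap the [Slow] spec asserts on
		}
	}
	// The printed delays reach the 5m0s cap by the sixth restart and stay
	// there; the spec only samples the delay once the cap is in effect,
	// which is why the progress reports above run well past ten minutes.
}
```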
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sVariable\sExpansion\sshould\sverify\sthat\sa\sfailing\ssubpath\sexpansion\scan\sbe\smodified\sduring\sthe\slifecycle\sof\sa\scontainer\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/pod/pod_client.go:134 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Update(0xc0013f0df8?, {0xc00183cb80?, 0x32?}, 0x78958b0?) test/e2e/framework/pod/pod_client.go:134 +0xd5 k8s.io/kubernetes/test/e2e/common/node.glob..func7.7() test/e2e/common/node/expansion.go:272 +0x3e6 There were additional failures detected after the initial failure: [FAILED] Nov 25 17:57:55.746: failed to list events in namespace "var-expansion-6248": Get "https://35.233.152.153/api/v1/namespaces/var-expansion-6248/events": dial tcp 35.233.152.153:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 17:57:55.786: Couldn't delete ns: "var-expansion-6248": Delete "https://35.233.152.153/api/v1/namespaces/var-expansion-6248": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/var-expansion-6248", Err:(*net.OpError)(0xc004244d70)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 17:57:28.614 Nov 25 17:57:28.614: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename var-expansion 11/25/22 17:57:28.617 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 17:57:28.786 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 17:57:28.882 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/25/22 17:57:28.967 Nov 25 17:57:29.028: INFO: Waiting up to 2m0s for pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0" in namespace "var-expansion-6248" to be "running" Nov 25 17:57:29.086: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 57.965418ms Nov 25 17:57:31.128: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100158207s Nov 25 17:57:33.132: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103670821s Nov 25 17:57:35.134: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106118151s Nov 25 17:57:37.176: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147945955s Nov 25 17:57:39.133: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104332746s Nov 25 17:57:41.132: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.104214098s Nov 25 17:57:43.130: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.101750191s Nov 25 17:57:45.129: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.100609668s Nov 25 17:57:47.133: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.104676214s Nov 25 17:57:49.130: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.101830156s Nov 25 17:57:51.142: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114157319s Nov 25 17:57:53.128: INFO: Pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.100146363s Nov 25 17:57:55.126: INFO: Encountered non-retryable error while getting pod var-expansion-6248/var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0: Get "https://35.233.152.153/api/v1/namespaces/var-expansion-6248/pods/var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": dial tcp 35.233.152.153:443: connect: connection refused STEP: updating the pod 11/25/22 17:57:55.126 Nov 25 17:57:55.666: INFO: Unexpected error: <*errors.errorString | 0xc00135a920>: { s: "failed to get pod \"var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0\": Get \"https://35.233.152.153/api/v1/namespaces/var-expansion-6248/pods/var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0\": dial tcp 35.233.152.153:443: connect: connection refused", } Nov 25 17:57:55.666: FAIL: failed to get pod "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": Get "https://35.233.152.153/api/v1/namespaces/var-expansion-6248/pods/var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Update(0xc0013f0df8?, {0xc00183cb80?, 0x32?}, 0x78958b0?) test/e2e/framework/pod/pod_client.go:134 +0xd5 k8s.io/kubernetes/test/e2e/common/node.glob..func7.7() test/e2e/common/node/expansion.go:272 +0x3e6 [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 25 17:57:55.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 17:57:55.706 STEP: Collecting events from namespace "var-expansion-6248". 11/25/22 17:57:55.706 Nov 25 17:57:55.746: INFO: Unexpected error: failed to list events in namespace "var-expansion-6248": <*url.Error | 0xc001b747e0>: { Op: "Get", URL: "https://35.233.152.153/api/v1/namespaces/var-expansion-6248/events", Err: <*net.OpError | 0xc001af4fa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001c52570>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 152, 153], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0002d3bc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 17:57:55.746: FAIL: failed to list events in namespace "var-expansion-6248": Get "https://35.233.152.153/api/v1/namespaces/var-expansion-6248/events": dial tcp 35.233.152.153:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0040725c0, {0xc000ccb8c0, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0002e3d40}, {0xc000ccb8c0, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004072650?, {0xc000ccb8c0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0003f7590) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001345a90?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001345a90?, 0x29449fc?}, {0xae73300?, 0xc0000b9780?, 0xc00425c558?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] 
[sig-node] Variable Expansion tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-6248" for this suite. 11/25/22 17:57:55.746 Nov 25 17:57:55.786: FAIL: Couldn't delete ns: "var-expansion-6248": Delete "https://35.233.152.153/api/v1/namespaces/var-expansion-6248": dial tcp 35.233.152.153:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.152.153/api/v1/namespaces/var-expansion-6248", Err:(*net.OpError)(0xc004244d70)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0003f7590) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0013459d0?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013459d0?, 0x7fe0bc8?}, {0xae73300?, 0x1000000039ed2e0?, 0xc00199a0f0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
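The repeated Phase="Pending" lines in the log above come from a poll loop that re-reads the pod roughly every 2s and checks its phase; once the apiserver stops answering, the transport error is surfaced and the wait aborts with the "non-retryable error" message instead of another status line. A rough client-go sketch of that loop follows, assuming a 2s interval and treating every apiserver error as fatal (the framework's exact interval and error classification differ).

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodRunning polls the pod every 2s until it reports PodRunning or the
    // timeout expires. Returning a non-nil error from the condition aborts the
    // poll at once, which is how an apiserver outage shows up as a failure rather
    // than yet another "Pending" line.
    func waitForPodRunning(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        start := time.Now()
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err // treat any apiserver error as fatal in this sketch
            }
            fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
            return pod.Status.Phase == corev1.PodRunning, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        if err := waitForPodRunning(cs, "var-expansion-6248",
            "var-expansion-8a7f6c4b-be53-4aca-9835-eb7aff31dad0", 2*time.Minute); err != nil {
            fmt.Println("wait failed:", err)
        }
    }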
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
test/e2e/framework/debug/dump.go:44 k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0012b5da0, {0xc00323bba0, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00420bba0}, {0xc00323bba0, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001ca6480?, {0xc00323bba0?, 0x3?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:341 +0x82d k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d273b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00175ab00?, 0xc003e51f08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xb3e1b0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00175ab00?, 0x0?}, {0xae73300?, 0x0?, 0xc000164a28?}) /usr/local/go/src/reflect/value.go:368 +0xbc
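The DeferCleanup failure in this trace is the namespace-info dump (test/e2e/framework/debug/dump.go:44), which at its core just lists the Events in the test namespace and prints them; with the apiserver refusing connections, even that diagnostic step fails. A stripped-down sketch of such an event dump is below; the helper name dumpEvents and the sorting/printing details are illustrative, not the framework's implementation.

    package main

    import (
        "context"
        "fmt"
        "sort"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // dumpEvents lists all events in the namespace and prints them sorted by
    // first-seen time. When the apiserver is unreachable, the List call is where
    // the "failed to list events in namespace ..." error originates.
    func dumpEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
        events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return fmt.Errorf("failed to list events in namespace %q: %w", ns, err)
        }
        items := events.Items
        sort.Slice(items, func(i, j int) bool {
            return items[i].FirstTimestamp.Before(&items[j].FirstTimestamp)
        })
        fmt.Printf("Found %d events in namespace %q.\n", len(items), ns)
        for _, e := range items {
            fmt.Printf("%s %s/%s: %s\n", e.FirstTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
        }
        return nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        if err := dumpEvents(context.Background(), kubernetes.NewForConfigOrDie(config), "multivolume-9968"); err != nil {
            fmt.Println(err)
        }
    }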
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:04:32.186 Nov 25 18:04:32.186: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename multivolume 11/25/22 18:04:32.188 Nov 25 18:04:32.228: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:34.268: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:36.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:38.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:40.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:42.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:44.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:46.268: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:48.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:50.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:52.268: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:54.268: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:56.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused Nov 25 18:04:58.267: INFO: Unexpected error while creating namespace: Post "https://35.233.152.153/api/v1/namespaces": dial tcp 35.233.152.153:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:06:05.039 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:06:05.132 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/framework/metrics/init/init.go:31 [It] should access to two volumes with different volume mode and retain data across pod recreation on the same node test/e2e/storage/testsuites/multivolume.go:206 STEP: Building a driver namespace object, basename multivolume-9968 11/25/22 18:06:05.217 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:06:05.377 STEP: 
deploying csi-hostpath driver 11/25/22 18:06:05.46 Nov 25 18:06:05.691: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-attacher Nov 25 18:06:05.734: INFO: creating *v1.ClusterRole: external-attacher-runner-multivolume-9968 Nov 25 18:06:05.734: INFO: Define cluster role external-attacher-runner-multivolume-9968 Nov 25 18:06:05.777: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-multivolume-9968 Nov 25 18:06:05.821: INFO: creating *v1.Role: multivolume-9968-5943/external-attacher-cfg-multivolume-9968 Nov 25 18:06:05.871: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-attacher-role-cfg Nov 25 18:06:05.915: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-provisioner Nov 25 18:06:05.967: INFO: creating *v1.ClusterRole: external-provisioner-runner-multivolume-9968 Nov 25 18:06:05.967: INFO: Define cluster role external-provisioner-runner-multivolume-9968 Nov 25 18:06:06.013: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-multivolume-9968 Nov 25 18:06:06.148: INFO: creating *v1.Role: multivolume-9968-5943/external-provisioner-cfg-multivolume-9968 Nov 25 18:06:06.240: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-provisioner-role-cfg Nov 25 18:06:06.383: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-snapshotter Nov 25 18:06:06.431: INFO: creating *v1.ClusterRole: external-snapshotter-runner-multivolume-9968 Nov 25 18:06:06.431: INFO: Define cluster role external-snapshotter-runner-multivolume-9968 Nov 25 18:06:06.486: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-multivolume-9968 Nov 25 18:06:06.528: INFO: creating *v1.Role: multivolume-9968-5943/external-snapshotter-leaderelection-multivolume-9968 Nov 25 18:06:06.573: INFO: creating *v1.RoleBinding: multivolume-9968-5943/external-snapshotter-leaderelection Nov 25 18:06:06.618: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-external-health-monitor-controller Nov 25 18:06:06.662: INFO: creating *v1.ClusterRole: external-health-monitor-controller-runner-multivolume-9968 Nov 25 18:06:06.662: INFO: Define cluster role external-health-monitor-controller-runner-multivolume-9968 Nov 25 18:06:06.726: INFO: creating *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-multivolume-9968 Nov 25 18:06:06.775: INFO: creating *v1.Role: multivolume-9968-5943/external-health-monitor-controller-cfg-multivolume-9968 Nov 25 18:06:06.821: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-external-health-monitor-controller-role-cfg Nov 25 18:06:06.865: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-resizer Nov 25 18:06:06.907: INFO: creating *v1.ClusterRole: external-resizer-runner-multivolume-9968 Nov 25 18:06:06.907: INFO: Define cluster role external-resizer-runner-multivolume-9968 Nov 25 18:06:06.952: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-multivolume-9968 Nov 25 18:06:07.065: INFO: creating *v1.Role: multivolume-9968-5943/external-resizer-cfg-multivolume-9968 Nov 25 18:06:07.114: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-resizer-role-cfg Nov 25 18:06:07.160: INFO: creating *v1.CSIDriver: csi-hostpath-multivolume-9968 Nov 25 18:06:07.205: INFO: creating *v1.ServiceAccount: multivolume-9968-5943/csi-hostpathplugin-sa Nov 25 18:06:07.256: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-multivolume-9968 Nov 25 18:06:07.300: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-multivolume-9968 Nov 25 
18:06:07.343: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-multivolume-9968 Nov 25 18:06:07.387: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-multivolume-9968 Nov 25 18:06:07.432: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-multivolume-9968 Nov 25 18:06:07.482: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-hostpathplugin-attacher-role Nov 25 18:06:07.526: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-hostpathplugin-health-monitor-controller-role Nov 25 18:06:07.571: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-hostpathplugin-provisioner-role Nov 25 18:06:07.615: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-hostpathplugin-resizer-role Nov 25 18:06:07.660: INFO: creating *v1.RoleBinding: multivolume-9968-5943/csi-hostpathplugin-snapshotter-role Nov 25 18:06:07.708: INFO: creating *v1.StatefulSet: multivolume-9968-5943/csi-hostpathplugin Nov 25 18:06:07.756: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-multivolume-9968 Nov 25 18:06:07.840: INFO: Creating resource for dynamic PV Nov 25 18:06:07.840: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(csi-hostpath) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-9968qbqdn 11/25/22 18:06:07.84 STEP: creating a claim 11/25/22 18:06:07.938 Nov 25 18:06:07.996: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathtzf88] to have phase Bound Nov 25 18:06:08.071: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:10.127: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:12.179: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:14.247: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:16.312: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:18.393: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:20.475: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:22.529: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:24.615: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:26.686: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:28.760: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:30.836: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:33.048: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:35.135: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:37.205: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:39.293: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:41.348: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:43.491: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. 
Nov 25 18:06:45.553: INFO: PersistentVolumeClaim csi-hostpathtzf88 found but phase is Pending instead of Bound. Nov 25 18:06:47.618: INFO: PersistentVolumeClaim csi-hostpathtzf88 found and phase=Bound (39.621499212s) Nov 25 18:06:47.795: INFO: Creating resource for dynamic PV Nov 25 18:06:47.795: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(csi-hostpath) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-99686g7rl 11/25/22 18:06:47.795 STEP: creating a claim 11/25/22 18:06:47.859 Nov 25 18:06:47.938: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathsc4jb] to have phase Bound Nov 25 18:06:48.015: INFO: PersistentVolumeClaim csi-hostpathsc4jb found but phase is Pending instead of Bound. Nov 25 18:06:50.073: INFO: PersistentVolumeClaim csi-hostpathsc4jb found and phase=Bound (2.134998748s) STEP: Creating pod on {Name:bootstrap-e2e-minion-group-4mzt Selector:map[] Affinity:nil} with multiple volumes 11/25/22 18:06:50.176 Nov 25 18:06:50.244: INFO: Waiting up to 5m0s for pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214" in namespace "multivolume-9968" to be "running" Nov 25 18:06:50.309: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 65.393931ms Nov 25 18:06:52.361: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116817605s Nov 25 18:06:54.431: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18707945s Nov 25 18:06:56.376: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132453212s Nov 25 18:06:58.472: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228223872s Nov 25 18:07:00.371: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127658984s Nov 25 18:07:02.382: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 12.137966419s Nov 25 18:07:04.373: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Pending", Reason="", readiness=false. Elapsed: 14.129129766s Nov 25 18:07:06.392: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.147975797s Nov 25 18:07:06.392: INFO: Pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214" satisfied condition "running" STEP: Checking if the volume1 exists as expected volume mode (Block) 11/25/22 18:07:06.488 Nov 25 18:07:06.488: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:06.488: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:06.490: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:06.490: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fmnt%2Fvolume1&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:07.039: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:07.039: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:07.041: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:07.041: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fmnt%2Fvolume1&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if write to the volume1 works properly 11/25/22 18:07:07.634 Nov 25 18:07:07.634: INFO: ExecWithOptions {Command:[/bin/sh -c echo 8Zy4n/OTq5QFlPj2JOzIxcm57Cxq+8Xea0gVXYUftN+c6b3ZJEG9EcSLENQkiyIZF0a4kuCGiIHfflvb2EG0fQ== | base64 -d | sha256sum] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:07.634: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:07.635: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:07.635: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=echo+8Zy4n%2FOTq5QFlPj2JOzIxcm57Cxq%2B8Xea0gVXYUftN%2Bc6b3ZJEG9EcSLENQkiyIZF0a4kuCGiIHfflvb2EG0fQ%3D%3D+%7C+base64+-d+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:08.263: INFO: ExecWithOptions {Command:[/bin/sh -c echo 8Zy4n/OTq5QFlPj2JOzIxcm57Cxq+8Xea0gVXYUftN+c6b3ZJEG9EcSLENQkiyIZF0a4kuCGiIHfflvb2EG0fQ== | base64 -d | dd of=/mnt/volume1 bs=64 count=1] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:08.263: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:08.264: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:08.264: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=echo+8Zy4n%2FOTq5QFlPj2JOzIxcm57Cxq%2B8Xea0gVXYUftN%2Bc6b3ZJEG9EcSLENQkiyIZF0a4kuCGiIHfflvb2EG0fQ%3D%3D+%7C+base64+-d+%7C+dd+of%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1&container=write-pod&container=write-pod&stderr=true&stdout=true) 
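The ExecWithOptions entries around this point show the pattern the multiVolume test uses for the raw Block volume: confirm /mnt/volume1 is a device node (test -b), pipe 64 random bytes through base64 -d | dd onto the device, then read them back with dd and compare sha256 digests via grep -Fq. Below is a self-contained sketch of that round-trip that shells out to kubectl exec rather than using the framework's ExecWithOptions helper; the pod, namespace, and container names are taken from the log, everything else is illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "os/exec"
    )

    const (
        ns        = "multivolume-9968"
        pod       = "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214"
        container = "write-pod"
        device    = "/mnt/volume1"
    )

    // execInPod runs a shell command inside the test container, mirroring the
    // ExecWithOptions {Command:[/bin/sh -c ...]} entries in the log.
    func execInPod(cmd string) (string, error) {
        out, err := exec.Command("kubectl", "exec", "-n", ns, pod, "-c", container,
            "--", "/bin/sh", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        // 64 random bytes, shipped into the pod as base64 (as in the log).
        buf := make([]byte, 64)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        payload := base64.StdEncoding.EncodeToString(buf)
        sum := fmt.Sprintf("%x", sha256.Sum256(buf))

        // Volume mode check: a raw block volume is a device node, not a directory.
        if _, err := execInPod("test -b " + device); err != nil {
            panic("expected a block device at " + device)
        }

        // Write the payload to the raw device, then read it back and compare hashes.
        if _, err := execInPod(fmt.Sprintf("echo %s | base64 -d | dd of=%s bs=64 count=1", payload, device)); err != nil {
            panic(err)
        }
        if _, err := execInPod(fmt.Sprintf("dd if=%s bs=64 count=1 | sha256sum | grep -Fq %s", device, sum)); err != nil {
            panic("data read back from " + device + " did not match the written payload")
        }
        fmt.Println("write/read round-trip verified, sha256:", sum)
    }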
STEP: Checking if read from the volume1 works properly 11/25/22 18:07:08.967 Nov 25 18:07:08.967: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:08.967: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:08.969: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:08.969: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:10.085: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum | grep -Fq 7f2a6c99971bf6ec2001649aba7422fa983f0d71d85f906431c2cd1b47133778] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:10.085: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:10.087: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:10.087: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+7f2a6c99971bf6ec2001649aba7422fa983f0d71d85f906431c2cd1b47133778&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if the volume2 exists as expected volume mode (Filesystem) 11/25/22 18:07:10.774 Nov 25 18:07:10.775: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:10.775: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:10.776: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:10.776: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fmnt%2Fvolume2&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:11.220: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:11.220: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:11.222: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:11.222: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fmnt%2Fvolume2&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if write to the volume2 works properly 11/25/22 18:07:11.726 Nov 25 18:07:11.727: INFO: ExecWithOptions {Command:[/bin/sh -c echo zh0+1KSVqLwAOaEaI4j9X2a+UQX/WMSu44ivev/T7FA3y3m87I3FYrNean5t1Xs4zArver1fLBlcgu/WB3FlnA== | base64 -d | sha256sum] Namespace:multivolume-9968 
PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:11.727: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:11.728: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:11.728: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=echo+zh0%2B1KSVqLwAOaEaI4j9X2a%2BUQX%2FWMSu44ivev%2FT7FA3y3m87I3FYrNean5t1Xs4zArver1fLBlcgu%2FWB3FlnA%3D%3D+%7C+base64+-d+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:12.367: INFO: ExecWithOptions {Command:[/bin/sh -c echo zh0+1KSVqLwAOaEaI4j9X2a+UQX/WMSu44ivev/T7FA3y3m87I3FYrNean5t1Xs4zArver1fLBlcgu/WB3FlnA== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:12.367: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:12.369: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:12.369: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=echo+zh0%2B1KSVqLwAOaEaI4j9X2a%2BUQX%2FWMSu44ivev%2FT7FA3y3m87I3FYrNean5t1Xs4zArver1fLBlcgu%2FWB3FlnA%3D%3D+%7C+base64+-d+%7C+dd+of%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if read from the volume2 works properly 11/25/22 18:07:12.927 Nov 25 18:07:12.927: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:12.927: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:12.929: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:12.929: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:13.570: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq a4e961a65dbe24880b6d7ca03e7143eb851bdf5039908601da610c951c5c7600] Namespace:multivolume-9968 PodName:pod-341a432f-5f4b-45b5-b3d8-ab2eae142214 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:13.570: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:13.571: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:13.571: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-341a432f-5f4b-45b5-b3d8-ab2eae142214/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+a4e961a65dbe24880b6d7ca03e7143eb851bdf5039908601da610c951c5c7600&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:14.253: INFO: Deleting pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214" 
in namespace "multivolume-9968" Nov 25 18:07:14.353: INFO: Wait up to 5m0s for pod "pod-341a432f-5f4b-45b5-b3d8-ab2eae142214" to be fully deleted STEP: Creating pod on {Name:bootstrap-e2e-minion-group-4mzt Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[bootstrap-e2e-minion-group-4mzt],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes 11/25/22 18:07:16.512 Nov 25 18:07:16.643: INFO: Waiting up to 5m0s for pod "pod-28575165-053a-42ef-b115-f46efb39ea68" in namespace "multivolume-9968" to be "running" Nov 25 18:07:16.701: INFO: Pod "pod-28575165-053a-42ef-b115-f46efb39ea68": Phase="Pending", Reason="", readiness=false. Elapsed: 58.06153ms Nov 25 18:07:18.794: INFO: Pod "pod-28575165-053a-42ef-b115-f46efb39ea68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151557536s Nov 25 18:07:20.786: INFO: Pod "pod-28575165-053a-42ef-b115-f46efb39ea68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14282492s Nov 25 18:07:22.753: INFO: Pod "pod-28575165-053a-42ef-b115-f46efb39ea68": Phase="Running", Reason="", readiness=true. Elapsed: 6.110222707s Nov 25 18:07:22.753: INFO: Pod "pod-28575165-053a-42ef-b115-f46efb39ea68" satisfied condition "running" STEP: Checking if the volume1 exists as expected volume mode (Block) 11/25/22 18:07:22.811 Nov 25 18:07:22.811: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:22.811: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:22.813: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:22.813: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fmnt%2Fvolume1&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:23.349: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:23.349: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:23.353: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:23.353: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fmnt%2Fvolume1&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if read from the volume1 works properly 11/25/22 18:07:23.797 Nov 25 18:07:23.797: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:23.797: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:23.798: INFO: ExecWithOptions: Clientset creation Nov 25 
18:07:23.798: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:24.594: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum | grep -Fq 7f2a6c99971bf6ec2001649aba7422fa983f0d71d85f906431c2cd1b47133778] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:24.594: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:24.595: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:24.596: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+7f2a6c99971bf6ec2001649aba7422fa983f0d71d85f906431c2cd1b47133778&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if write to the volume1 works properly 11/25/22 18:07:25.386 Nov 25 18:07:25.386: INFO: ExecWithOptions {Command:[/bin/sh -c echo Ewb6grq0hEl1Z0UjEXsqS6Dr3xh01NBqoitIeK1+EgLnttOr0KlIyOlWdnc9qAXa73uzLkpYNpCcpURjR0opWw== | base64 -d | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:25.386: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:25.387: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:25.388: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=echo+Ewb6grq0hEl1Z0UjEXsqS6Dr3xh01NBqoitIeK1%2BEgLnttOr0KlIyOlWdnc9qAXa73uzLkpYNpCcpURjR0opWw%3D%3D+%7C+base64+-d+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:25.837: INFO: ExecWithOptions {Command:[/bin/sh -c echo Ewb6grq0hEl1Z0UjEXsqS6Dr3xh01NBqoitIeK1+EgLnttOr0KlIyOlWdnc9qAXa73uzLkpYNpCcpURjR0opWw== | base64 -d | dd of=/mnt/volume1 bs=64 count=1] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:25.837: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:25.839: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:25.839: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=echo+Ewb6grq0hEl1Z0UjEXsqS6Dr3xh01NBqoitIeK1%2BEgLnttOr0KlIyOlWdnc9qAXa73uzLkpYNpCcpURjR0opWw%3D%3D+%7C+base64+-d+%7C+dd+of%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if read from the volume1 works properly 11/25/22 18:07:26.375 Nov 25 18:07:26.375: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 
18:07:26.375: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:26.377: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:26.377: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:26.861: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum | grep -Fq 5905ed936e02d8e62919184f61c5ca943eefd549a8ada45f3aacebec4b063c8b] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:26.861: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:26.862: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:26.862: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume1++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+5905ed936e02d8e62919184f61c5ca943eefd549a8ada45f3aacebec4b063c8b&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if the volume2 exists as expected volume mode (Filesystem) 11/25/22 18:07:27.52 Nov 25 18:07:27.520: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:27.520: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:27.522: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:27.523: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fmnt%2Fvolume2&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:28.112: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:28.112: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:28.113: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:28.113: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fmnt%2Fvolume2&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if read from the volume2 works properly 11/25/22 18:07:28.951 Nov 25 18:07:28.952: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:28.952: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:28.953: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:28.953: INFO: ExecWithOptions: execute(POST 
https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:29.702: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq a4e961a65dbe24880b6d7ca03e7143eb851bdf5039908601da610c951c5c7600] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:29.702: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:29.703: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:29.703: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+a4e961a65dbe24880b6d7ca03e7143eb851bdf5039908601da610c951c5c7600&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if write to the volume2 works properly 11/25/22 18:07:30.127 Nov 25 18:07:30.127: INFO: ExecWithOptions {Command:[/bin/sh -c echo 8bKfy7/WBWBpiN60G+5i5YiX7q0zUbuv14lt0xazVgtfdqev0Ei22E8fWdnkFl9m7Rc297Y4ry8ao6jsiL1jmg== | base64 -d | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:30.127: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:30.129: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:30.129: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=echo+8bKfy7%2FWBWBpiN60G%2B5i5YiX7q0zUbuv14lt0xazVgtfdqev0Ei22E8fWdnkFl9m7Rc297Y4ry8ao6jsiL1jmg%3D%3D+%7C+base64+-d+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:30.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo 8bKfy7/WBWBpiN60G+5i5YiX7q0zUbuv14lt0xazVgtfdqev0Ei22E8fWdnkFl9m7Rc297Y4ry8ao6jsiL1jmg== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:30.682: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:30.683: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:30.683: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=echo+8bKfy7%2FWBWBpiN60G%2B5i5YiX7q0zUbuv14lt0xazVgtfdqev0Ei22E8fWdnkFl9m7Rc297Y4ry8ao6jsiL1jmg%3D%3D+%7C+base64+-d+%7C+dd+of%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1&container=write-pod&container=write-pod&stderr=true&stdout=true) STEP: Checking if read from the volume2 works properly 11/25/22 18:07:31.234 Nov 25 18:07:31.235: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Nov 25 18:07:31.235: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:31.236: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:31.236: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:31.703: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq 9120893f02007298f204caf67b78ca06d357dfc73508959dedf6ac9c81802527] Namespace:multivolume-9968 PodName:pod-28575165-053a-42ef-b115-f46efb39ea68 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 18:07:31.703: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 18:07:31.705: INFO: ExecWithOptions: Clientset creation Nov 25 18:07:31.705: INFO: ExecWithOptions: execute(POST https://35.233.152.153/api/v1/namespaces/multivolume-9968/pods/pod-28575165-053a-42ef-b115-f46efb39ea68/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2%2Ffile1.txt++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+9120893f02007298f204caf67b78ca06d357dfc73508959dedf6ac9c81802527&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 18:07:32.390: INFO: Deleting pod "pod-28575165-053a-42ef-b115-f46efb39ea68" in namespace "multivolume-9968" Nov 25 18:07:32.477: INFO: Wait up to 5m0s for pod "pod-28575165-053a-42ef-b115-f46efb39ea68" to be fully deleted STEP: Deleting pvc 11/25/22 18:07:34.614 Nov 25 18:07:34.614: INFO: Deleting PersistentVolumeClaim "csi-hostpathtzf88" Nov 25 18:07:34.711: INFO: Waiting up to 5m0s for PersistentVolume pvc-f21df293-dc24-4028-b83a-e918c5165fee to get deleted Nov 25 18:07:34.782: INFO: PersistentVolume pvc-f21df293-dc24-4028-b83a-e918c5165fee found and phase=Bound (70.672556ms) Nov 25 18:07:40.083: INFO: PersistentVolume pvc-f21df293-dc24-4028-b83a-e918c5165fee was removed STEP: Deleting sc 11/25/22 18:07:40.083 STEP: Deleting pvc 11/25/22 18:07:40.209 Nov 25 18:07:40.209: INFO: Deleting PersistentVolumeClaim "csi-hostpathsc4jb" Nov 25 18:07:40.286: INFO: Waiting up to 5m0s for PersistentVolume pvc-e0f768d0-9e32-4678-8491-ae9f49837893 to get deleted Nov 25 18:07:40.347: INFO: PersistentVolume pvc-e0f768d0-9e32-4678-8491-ae9f49837893 found and phase=Bound (61.267248ms) Nov 25 18:07:45.426: INFO: PersistentVolume pvc-e0f768d0-9e32-4678-8491-ae9f49837893 was removed STEP: Deleting sc 11/25/22 18:07:45.426 [AfterEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:07:45.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/storage/drivers/csi.go:289 STEP: deleting the test namespace: multivolume-9968 11/25/22 18:07:45.572 STEP: Waiting for namespaces [multivolume-9968] to vanish 11/25/22 18:07:45.629 ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node (Spec Runtime: 8m13.387s) test/e2e/storage/testsuites/multivolume.go:206 In [DeferCleanup (Each)] (Node Runtime: 
5m0.001s) test/e2e/storage/drivers/csi.go:289 At [By Step] Waiting for namespaces [multivolume-9968] to vanish (Step Runtime: 4m59.944s) test/e2e/framework/util.go:241 Spec Goroutine goroutine 2363 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0005bed38, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x739a380?, 0xc00128a580?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x6ab2720?, 0xc0021710e0?, 0xc00323bba0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework.WaitForNamespacesDeleted({0x801de88?, 0xc00420bba0}, {0xc0010f5db0, 0x1, 0x0?}, 0x0?) test/e2e/framework/util.go:247 > k8s.io/kubernetes/test/e2e/framework.(*Framework).DeleteNamespace.func1() test/e2e/framework/framework.go:395 > k8s.io/kubernetes/test/e2e/framework.(*Framework).DeleteNamespace(0xc000d273b0?, {0xc00323bba0?, 0x10?}) test/e2e/framework/framework.go:415 > k8s.io/kubernetes/test/e2e/storage/drivers.generateDriverCleanupFunc.func1.1() test/e2e/storage/drivers/csi.go:1007 > k8s.io/kubernetes/test/e2e/storage/drivers.tryFunc(0xc003658c60?) test/e2e/storage/drivers/csi.go:992 > k8s.io/kubernetes/test/e2e/storage/drivers.generateDriverCleanupFunc.func1() test/e2e/storage/drivers/csi.go:1007 reflect.Value.call({0x6627cc0?, 0xc001db18b0?, 0xc003e1bf08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00197cab0?}) /usr/local/go/src/reflect/value.go:584 reflect.Value.Call({0x6627cc0?, 0xc001db18b0?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.NewCleanupNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/node.go:571 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00197ca80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 1590 [select, 1 minutes] > k8s.io/kubernetes/test/e2e/storage/podlogs.CopyPodLogs.func1() test/e2e/storage/podlogs/podlogs.go:246 > k8s.io/kubernetes/test/e2e/storage/podlogs.CopyPodLogs test/e2e/storage/podlogs/podlogs.go:101 goroutine 1637 [select, 1 minutes] > k8s.io/kubernetes/test/e2e/storage/podlogs.WatchPods.func3() test/e2e/storage/podlogs/podlogs.go:304 > k8s.io/kubernetes/test/e2e/storage/podlogs.WatchPods test/e2e/storage/podlogs/podlogs.go:294 ------------------------------ Nov 25 18:12:45.763: INFO: error deleting namespace multivolume-9968: timed out waiting for the condition STEP: uninstalling csi csi-hostpath driver 11/25/22 18:12:45.763 Nov 25 18:12:45.763: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-attacher Nov 25 18:12:45.921: INFO: deleting *v1.ClusterRole: external-attacher-runner-multivolume-9968 Nov 25 18:12:45.987: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-multivolume-9968 Nov 25 18:12:46.065: INFO: deleting *v1.Role: 
multivolume-9968-5943/external-attacher-cfg-multivolume-9968 Nov 25 18:12:46.127: INFO: deleting *v1.RoleBinding: multivolume-9968-5943/csi-attacher-role-cfg Nov 25 18:12:46.200: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-provisioner Nov 25 18:12:46.254: INFO: deleting *v1.ClusterRole: external-provisioner-runner-multivolume-9968 Nov 25 18:12:46.311: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-multivolume-9968 Nov 25 18:12:46.376: INFO: deleting *v1.Role: multivolume-9968-5943/external-provisioner-cfg-multivolume-9968 Nov 25 18:12:46.439: INFO: deleting *v1.RoleBinding: multivolume-9968-5943/csi-provisioner-role-cfg Nov 25 18:12:46.504: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-snapshotter Nov 25 18:12:46.572: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-multivolume-9968 Nov 25 18:12:46.658: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-multivolume-9968 Nov 25 18:12:46.729: INFO: deleting *v1.Role: multivolume-9968-5943/external-snapshotter-leaderelection-multivolume-9968 Nov 25 18:12:46.797: INFO: deleting *v1.RoleBinding: multivolume-9968-5943/external-snapshotter-leaderelection Nov 25 18:12:46.877: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-external-health-monitor-controller Nov 25 18:12:46.946: INFO: deleting *v1.ClusterRole: external-health-monitor-controller-runner-multivolume-9968 Nov 25 18:12:47.001: INFO: deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-multivolume-9968 Nov 25 18:12:47.059: INFO: deleting *v1.Role: multivolume-9968-5943/external-health-monitor-controller-cfg-multivolume-9968 Nov 25 18:12:47.133: INFO: deleting *v1.RoleBinding: multivolume-9968-5943/csi-external-health-monitor-controller-role-cfg Nov 25 18:12:47.227: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-resizer Nov 25 18:12:47.403: INFO: deleting *v1.ClusterRole: external-resizer-runner-multivolume-9968 Nov 25 18:12:47.520: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-multivolume-9968 Nov 25 18:12:47.636: INFO: deleting *v1.Role: multivolume-9968-5943/external-resizer-cfg-multivolume-9968 Nov 25 18:12:47.716: INFO: deleting *v1.RoleBinding: multivolume-9968-5943/csi-resizer-role-cfg Nov 25 18:12:47.809: INFO: deleting *v1.CSIDriver: csi-hostpath-multivolume-9968 Nov 25 18:12:47.924: INFO: deleting *v1.ServiceAccount: multivolume-9968-5943/csi-hostpathplugin-sa Nov 25 18:12:47.995: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-multivolume-9968 Nov 25 18:12:48.078: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-multivolume-9968 Nov 25 18:12:48.153: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-multivolume-9968 Nov 25 18:12:48.256: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-multivolume-9968 Nov 25 18:12:48.358: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-multivolume-9968 Nov 25 18:12:48.496: INFO: deleting *v1.RoleB