go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:16:44.745: failed to list events in namespace "chunking-4477": Get "https://35.230.67.129/api/v1/namespaces/chunking-4477/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:16:44.785: Couldn't delete ns: "chunking-4477": Delete "https://35.230.67.129/api/v1/namespaces/chunking-4477": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/chunking-4477", Err:(*net.OpError)(0xc0047670e0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:13:51.418
Nov 26 04:13:51.418: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/26/22 04:13:51.423
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:14:46.602
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:14:46.725
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/apimachinery/chunking.go:51
STEP: creating a large number of resources 11/26/22 04:14:46.829
[It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
  test/e2e/apimachinery/chunking.go:126
STEP: retrieving the first page 11/26/22 04:15:04.453
Nov 26 04:15:04.585: INFO: Retrieved 40/40 results with rv 4635 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDYzNSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0
STEP: retrieving the second page until the token expires 11/26/22 04:15:04.585
Nov 26 04:15:24.631: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDYzNSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 26 04:15:44.630: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDYzNSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 26 04:16:04.660: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDYzNSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
Nov 26 04:16:24.682: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NDYzNSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet
STEP: retrieving the second page again with the token received with the error message 11/26/22 04:16:44.626
Nov 26 04:16:44.665: INFO: Unexpected error: failed to list pod templates in namespace: chunking-4477, given inconsistent continue token and limit: 40: <*url.Error | 0xc00482a6f0>: {Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/chunking-4477/podtemplates?limit=40", Err: <*net.OpError | 0xc004766eb0>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00482a6c0>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc000458ac0>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:16:44.665: FAIL: failed to list pod templates in namespace: chunking-4477, given inconsistent continue token and limit: 40: Get "https://35.230.67.129/api/v1/namespaces/chunking-4477/podtemplates?limit=40": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc

[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 26 04:16:44.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:16:44.705
STEP: Collecting events from namespace "chunking-4477". 11/26/22 04:16:44.705
Nov 26 04:16:44.745: INFO: Unexpected error: failed to list events in namespace "chunking-4477": <*url.Error | 0xc003afcae0>: {Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/chunking-4477/events", Err: <*net.OpError | 0xc003598960>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002d34a20>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc00474a6e0>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:16:44.745: FAIL: failed to list events in namespace "chunking-4477": Get "https://35.230.67.129/api/v1/namespaces/chunking-4477/events": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0040ee5c0, {0xc003502860, 0xd})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0032dab60}, {0xc003502860, 0xd})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0040ee650?, {0xc003502860?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009b4780)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc0031adab0?, 0xc004333fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0031d10c8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0031adab0?, 0x29449fc?}, {0xae73300?, 0xc004333f80?, 0xc00348c6c0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
STEP: Destroying namespace "chunking-4477" for this suite. 11/26/22 04:16:44.745
Nov 26 04:16:44.785: FAIL: Couldn't delete ns: "chunking-4477": Delete "https://35.230.67.129/api/v1/namespaces/chunking-4477": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/chunking-4477", Err:(*net.OpError)(0xc0047670e0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009b4780)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0031ada30?, 0x3?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0031ada30?, 0x1?}, {0xae73300?, 0x3664c86?, 0x0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
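For context on what this test exercises: list chunking pages through objects with limit=40, and once etcd compacts the resourceVersion baked into the continue token, the apiserver answers 410 Gone together with a replacement token that resumes from the last key at a newer, possibly inconsistent revision. In this run the request never got that far because the apiserver refused connections entirely. A minimal client-go sketch of the resume loop the test covers; the namespace, page size, and kubeconfig path are taken from the log above, but the function names are illustrative and this is not the suite's code at chunking.go:177:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listAllPodTemplates pages through pod templates 40 at a time. When the
// continue token outlives the compacted revision it references, the
// apiserver returns 410 Gone plus a fresh token that restarts from the
// last key at a newer revision -- the "inconsistent list" the test accepts.
func listAllPodTemplates(ctx context.Context, cs kubernetes.Interface, ns string) error {
	opts := metav1.ListOptions{Limit: 40}
	for {
		page, err := cs.CoreV1().PodTemplates(ns).List(ctx, opts)
		if apierrors.IsResourceExpired(err) {
			// The replacement token rides along in the returned Status metadata.
			if status, ok := err.(apierrors.APIStatus); ok && status.Status().Continue != "" {
				opts.Continue = status.Status().Continue
				continue
			}
			return err
		}
		if err != nil {
			return err // in this run: dial tcp 35.230.67.129:443: connect: connection refused
		}
		fmt.Printf("retrieved %d results with rv %s\n", len(page.Items), page.ResourceVersion)
		if page.Continue == "" {
			return nil // final page
		}
		opts.Continue = page.Continue
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := listAllPodTemplates(context.Background(), cs, "chunking-4477"); err != nil {
		panic(err)
	}
}

The design point the test verifies is exactly the branch on IsResourceExpired: a consumer that resumes with the replacement token trades consistency for the ability to finish the enumeration.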
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:07:00.824: failed to list events in namespace "cronjob-4130": Get "https://35.230.67.129/api/v1/namespaces/cronjob-4130/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:07:00.864: Couldn't delete ns: "cronjob-4130": Delete "https://35.230.67.129/api/v1/namespaces/cronjob-4130": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/cronjob-4130", Err:(*net.OpError)(0xc002c2eb40)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:06:42.311
Nov 26 04:06:42.311: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 04:06:42.313
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:06:42.439
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:06:42.527
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/apps/cronjob.go:96
STEP: Creating a suspended cronjob 11/26/22 04:06:42.611
STEP: Ensuring no jobs are scheduled 11/26/22 04:06:42.664
STEP: Ensuring no job exists by listing jobs explicitly 11/26/22 04:07:00.704
Nov 26 04:07:00.744: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-4130: <*url.Error | 0xc002bf37a0>: {Op: "Get", URL: "https://35.230.67.129/apis/batch/v1/namespaces/cronjob-4130/jobs", Err: <*net.OpError | 0xc0026b1ef0>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002d6c120>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002cc0120>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:07:00.744: FAIL: Failed to list the CronJobs in namespace cronjob-4130: Get "https://35.230.67.129/apis/batch/v1/namespaces/cronjob-4130/jobs": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376

[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 04:07:00.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:07:00.784
STEP: Collecting events from namespace "cronjob-4130". 11/26/22 04:07:00.784
Nov 26 04:07:00.823: INFO: Unexpected error: failed to list events in namespace "cronjob-4130": <*url.Error | 0xc002bf3c20>: {Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/cronjob-4130/events", Err: <*net.OpError | 0xc002cfe190>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0025dd9b0>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002cc04a0>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:07:00.824: FAIL: failed to list events in namespace "cronjob-4130": Get "https://35.230.67.129/api/v1/namespaces/cronjob-4130/events": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0045445c0, {0xc001864bf0, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002567380}, {0xc001864bf0, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004544650?, {0xc001864bf0?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00027d860)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc0012c1e50?, 0xc0018f2fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002b668a8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0012c1e50?, 0x29449fc?}, {0xae73300?, 0xc0018f2f80?, 0x0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-4130" for this suite. 11/26/22 04:07:00.824
Nov 26 04:07:00.864: FAIL: Couldn't delete ns: "cronjob-4130": Delete "https://35.230.67.129/api/v1/namespaces/cronjob-4130": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/cronjob-4130", Err:(*net.OpError)(0xc002c2eb40)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00027d860)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0012c1da0?, 0xc000f09fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0012c1da0?, 0x0?}, {0xae73300?, 0x5?, 0xc002673698?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
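What this conformance test asserts before the apiserver died: a CronJob created with spec.suspend=true must never produce Jobs, checked first by watching for 18 seconds and then by listing Jobs outright (the list at cronjob.go:111 is the call that got connection refused). A rough client-go sketch of that setup; the function names, CronJob name, schedule, and command are illustrative (the busybox image is one that appears in this cluster's image list below), and this is not the suite's code:

package sketch

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSuspendedCronJob creates a CronJob that is suspended from birth,
// so the cronjob controller must never schedule a Job for it.
func createSuspendedCronJob(ctx context.Context, cs kubernetes.Interface, ns string) (*batchv1.CronJob, error) {
	suspend := true
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // would fire every minute if not suspended
			Suspend:  &suspend,      // the property under test
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	return cs.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{})
}

// anyJobs is the explicit check that failed in this run: list Jobs in the
// namespace and expect none to exist.
func anyJobs(ctx context.Context, cs kubernetes.Interface, ns string) (bool, error) {
	jobs, err := cs.BatchV1().Jobs(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err // here: connect: connection refused
	}
	return len(jobs.Items) > 0, nil
}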
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:133
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:133 +0x290
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:07:01.002: failed to list events in namespace "cronjob-2038": Get "https://35.230.67.129/api/v1/namespaces/cronjob-2038/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:07:01.043: Couldn't delete ns: "cronjob-2038": Delete "https://35.230.67.129/api/v1/namespaces/cronjob-2038": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/cronjob-2038", Err:(*net.OpError)(0xc00316b090)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:06:42.384
Nov 26 04:06:42.384: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 04:06:42.386
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:06:42.532
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:06:42.615
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/apps/cronjob.go:124
STEP: Creating a ForbidConcurrent cronjob 11/26/22 04:06:42.732
STEP: Ensuring a job is scheduled 11/26/22 04:06:42.882
Nov 26 04:07:00.922: INFO: Unexpected error: Failed to schedule CronJob forbid: <*url.Error | 0xc003ae53e0>: {Op: "Get", URL: "https://35.230.67.129/apis/batch/v1/namespaces/cronjob-2038/cronjobs/forbid", Err: <*net.OpError | 0xc0016a2c30>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00370e6f0>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc000c78740>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:07:00.922: FAIL: Failed to schedule CronJob forbid: Get "https://35.230.67.129/apis/batch/v1/namespaces/cronjob-2038/cronjobs/forbid": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:133 +0x290

[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 04:07:00.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:07:00.962
STEP: Collecting events from namespace "cronjob-2038". 11/26/22 04:07:00.962
Nov 26 04:07:01.002: INFO: Unexpected error: failed to list events in namespace "cronjob-2038": <*url.Error | 0xc003ae5860>: {Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/cronjob-2038/events", Err: <*net.OpError | 0xc0016a2e60>{Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00370ed80>{IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc000c78ae0>{Syscall: "connect", Err: <syscall.Errno>0x6f}}}
Nov 26 04:07:01.002: FAIL: failed to list events in namespace "cronjob-2038": Get "https://35.230.67.129/api/v1/namespaces/cronjob-2038/events": dial tcp 35.230.67.129:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0021fa5c0, {0xc000736a40, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00369a000}, {0xc000736a40, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0021fa650?, {0xc000736a40?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011a6870)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc000efa630?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc000efa630?, 0x29449fc?}, {0xae73300?, 0xc001222f80?, 0xc00124a648?})
	/usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-2038" for this suite. 11/26/22 04:07:01.003
Nov 26 04:07:01.043: FAIL: Couldn't delete ns: "cronjob-2038": Delete "https://35.230.67.129/api/v1/namespaces/cronjob-2038": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/cronjob-2038", Err:(*net.OpError)(0xc00316b090)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011a6870)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc000efa380?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc000efa380?, 0xc001224768?}, {0xae73300?, 0x801de88?, 0xc00369a000?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
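This sibling test flips the knob: with spec.concurrencyPolicy=Forbid the controller starts one Job and must not start a second while the first still runs. "Ensuring a job is scheduled" polls the CronJob's status for an active Job, and the Get of cronjobs/forbid at cronjob.go:133 is what hit the refused connection. A sketch of that poll under the same assumptions as the previous snippet; wait.PollImmediate here stands in for the framework's own polling helper, and the timeout values are illustrative:

package sketch

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// forbidSpec is the part that differs from the suspended CronJob above:
// only one Job may be active at a time.
func forbidSpec(template batchv1.JobTemplateSpec) batchv1.CronJobSpec {
	return batchv1.CronJobSpec{
		Schedule:          "*/1 * * * *",
		ConcurrencyPolicy: batchv1.ForbidConcurrent,
		JobTemplate:       template, // any Job that outlives one schedule interval
	}
}

// waitForActiveJob mirrors the step logged as "Ensuring a job is scheduled":
// fetch the CronJob until its status reports an active Job. Returning a
// non-nil error from the condition aborts wait.PollImmediate at once, which
// matches this run failing 18 seconds in rather than at the full timeout.
func waitForActiveJob(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		cj, err := cs.BatchV1().CronJobs(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // here: connect: connection refused
		}
		return len(cj.Status.Active) > 0, nil
	})
}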
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010441e0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:17:04.428
Nov 26 04:17:04.428: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset 11/26/22 04:17:04.43
Nov 26 04:17:04.469: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:06.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:08.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:10.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:12.510: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:14.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:16.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:18.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:20.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:22.511: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:24.510: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:26.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:28.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:30.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:32.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:34.509: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:34.551: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:34.551: INFO: Unexpected error: <*errors.errorString | 0xc0001c99e0>: {s: "timed out waiting for the condition"}
Nov 26 04:17:34.551: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010441e0)
	test/e2e/framework/framework.go:241 +0x96f

[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 26 04:17:34.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:17:34.594
[DeferCleanup (Each)] [sig-apps] StatefulSet
  tear down framework | framework.go:193
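This StatefulSet run, and the one after it, never reach the test body: framework.BeforeEach (framework.go:241) creates the test namespace and waits for its default service account, retrying roughly every two seconds and giving up with the generic "timed out waiting for the condition" because the apiserver at 35.230.67.129:443 refused connections for the whole window. The shape of that retry loop, approximated with client-go and wait.PollImmediate; the function name and timeout values are illustrative, not the framework's exact code:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries creation so a brief apiserver blip does not
// fail the test. Returning (false, nil) from the condition swallows the
// error and retries on the next tick; a dead apiserver therefore exhausts
// the timeout and surfaces only as "timed out waiting for the condition".
func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // retry on the next tick
		}
		created = ns
		return true, nil
	})
	return created, err
}

The trade-off is visible in the log: swallowing each Post error keeps flaky runs alive, but it also means the real cause (connection refused) only appears in the INFO lines, not in the final failure message.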
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011c82d0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:11:38.955
Nov 26 04:11:38.955: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset 11/26/22 04:11:38.957
ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:11:38.996: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:11:41.036: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:13:45.726: INFO: Unexpected error: <*fmt.wrapError | 0xc003444c40>: {msg: "wait for service account \"default\" in namespace \"statefulset-6726\": timed out waiting for the condition", err: <*errors.errorString | 0xc000207d40>{s: "timed out waiting for the condition"}}
Nov 26 04:13:45.726: FAIL: wait for service account "default" in namespace "statefulset-6726": timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011c82d0)
	test/e2e/framework/framework.go:241 +0x96f

[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 26 04:13:45.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:13:45.813
STEP: Collecting events from namespace "statefulset-6726". 11/26/22 04:13:45.814
STEP: Found 0 events.
11/26/22 04:13:45.855 Nov 26 04:13:45.898: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 04:13:45.898: INFO: Nov 26 04:13:45.943: INFO: Logging node info for node bootstrap-e2e-master Nov 26 04:13:45.989: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 760c1d8a-0b99-4fc2-b794-2f7d92be53de 3106 0 2022-11-26 04:04:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:11:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 
110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.67.129,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db1e6ecd86ec3076124edcf143d32b8,SystemUUID:1db1e6ec-d86e-c307-6124-edcf143d32b8,BootID:d37249f8-fcc3-445d-9b62-007b3da4145b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:13:45.989: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 04:13:46.045: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 04:13:46.162: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container kube-controller-manager ready: false, restart count 5 Nov 26 04:13:46.162: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container kube-scheduler ready: true, restart count 3 Nov 26 04:13:46.162: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container etcd-container ready: true, restart count 1 Nov 26 04:13:46.162: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 04:04:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 26 04:13:46.162: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 26 04:13:46.162: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container kube-apiserver ready: true, restart count 3 Nov 26 04:13:46.162: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 04:04:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container l7-lb-controller ready: false, restart count 5 Nov 26 04:13:46.162: INFO: metadata-proxy-v0.1-jjfnm started at 2022-11-26 04:04:45 +0000 UTC (0+2 container statuses recorded) Nov 26 04:13:46.162: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:13:46.162: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:13:46.162: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:46.162: INFO: Container etcd-container ready: true, restart count 1 Nov 26 04:13:46.846: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 04:13:46.846: INFO: Logging node info for node bootstrap-e2e-minion-group-hf2n Nov 26 04:13:46.919: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hf2n b4560199-9a8d-465c-8e47-040369336248 2989 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hf2n kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-hf2n topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-6912":"bootstrap-e2e-minion-group-hf2n","csi-mock-csi-mock-volumes-2270":"csi-mock-csi-mock-volumes-2270"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 04:09:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 04:09:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 04:09:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-hf2n,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 
98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:12 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:12 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:12 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:09:12 +0000 UTC,LastTransitionTime:2022-11-26 04:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.126.198,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:43494faaeaac761c8a8b57a289b127cf,SystemUUID:43494faa-eaac-761c-8a8b-57a289b127cf,BootID:eee28487-7bcb-4974-ba88-3d521fe377c8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb,DevicePath:,},},Config:nil,},} Nov 26 04:13:46.919: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hf2n Nov 26 04:13:47.002: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hf2n Nov 26 04:13:47.121: INFO: httpd started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container httpd ready: true, restart count 1 Nov 26 04:13:47.121: INFO: coredns-6d97d5ddb-5dpbv started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container coredns ready: true, restart count 4 Nov 26 04:13:47.121: INFO: volume-snapshot-controller-0 started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container volume-snapshot-controller ready: true, restart count 4 Nov 26 04:13:47.121: INFO: hostpath-injector started at 2022-11-26 04:09:06 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container hostpath-injector ready: true, restart count 2 Nov 26 04:13:47.121: INFO: metadata-proxy-v0.1-ggcqp started at 2022-11-26 04:04:37 +0000 UTC (0+2 container statuses recorded) Nov 26 
04:13:47.121: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:13:47.121: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:13:47.121: INFO: netserver-0 started at 2022-11-26 04:09:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container webserver ready: true, restart count 3 Nov 26 04:13:47.121: INFO: csi-mockplugin-0 started at 2022-11-26 04:06:46 +0000 UTC (0+4 container statuses recorded) Nov 26 04:13:47.121: INFO: Container busybox ready: false, restart count 2 Nov 26 04:13:47.121: INFO: Container csi-provisioner ready: false, restart count 2 Nov 26 04:13:47.121: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container mock ready: true, restart count 3 Nov 26 04:13:47.121: INFO: netserver-0 started at 2022-11-26 04:06:53 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container webserver ready: false, restart count 5 Nov 26 04:13:47.121: INFO: csi-hostpathplugin-0 started at 2022-11-26 04:06:56 +0000 UTC (0+7 container statuses recorded) Nov 26 04:13:47.121: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container hostpath ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 04:13:47.121: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 04:13:47.121: INFO: kube-dns-autoscaler-5f6455f985-ddfgx started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container autoscaler ready: false, restart count 4 Nov 26 04:13:47.121: INFO: l7-default-backend-8549d69d99-cf5wx started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 04:13:47.121: INFO: kube-proxy-bootstrap-e2e-minion-group-hf2n started at 2022-11-26 04:04:36 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container kube-proxy ready: true, restart count 5 Nov 26 04:13:47.121: INFO: konnectivity-agent-qfdfv started at 2022-11-26 04:04:48 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.121: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 26 04:13:47.721: INFO: Latency metrics for node bootstrap-e2e-minion-group-hf2n Nov 26 04:13:47.721: INFO: Logging node info for node bootstrap-e2e-minion-group-qxpt Nov 26 04:13:47.762: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qxpt 3e62ac5b-dc9d-49e8-94e3-a204fdd36aeb 3022 0 2022-11-26 04:04:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qxpt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:08:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 04:09:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-qxpt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} 
{<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:08:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:08:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:08:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:08:50 +0000 UTC,LastTransitionTime:2022-11-26 04:04:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.120.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:099dc3b50152230c991041456d403315,SystemUUID:099dc3b5-0152-230c-9910-41456d403315,BootID:079e8ec2-fde9-4e37-907f-be0a19459444,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:13:47.762: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qxpt Nov 26 04:13:47.806: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qxpt Nov 26 04:13:47.898: INFO: netserver-1 started at 2022-11-26 04:09:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container webserver ready: false, restart count 4 Nov 26 04:13:47.898: INFO: metadata-proxy-v0.1-fbqgp started at 2022-11-26 04:04:46 +0000 UTC (0+2 container statuses recorded) Nov 26 04:13:47.898: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:13:47.898: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:13:47.898: INFO: test-hostpath-type-75qnt started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 04:13:47.898: INFO: pod-6283113a-983d-4b52-9602-e11375b10f5d started at 2022-11-26 04:08:46 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container write-pod ready: false, restart count 0 Nov 26 04:13:47.898: INFO: mutability-test-wwdrh started at 2022-11-26 04:08:34 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container netexec ready: true, restart count 4 Nov 26 04:13:47.898: INFO: kube-proxy-bootstrap-e2e-minion-group-qxpt started at 2022-11-26 04:04:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container kube-proxy ready: true, restart count 4 Nov 26 04:13:47.898: INFO: metrics-server-v0.5.2-867b8754b9-2hbm5 started at 2022-11-26 04:05:12 +0000 UTC (0+2 container statuses recorded) Nov 26 04:13:47.898: INFO: Container metrics-server ready: false, restart count 4 Nov 26 04:13:47.898: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 26 04:13:47.898: INFO: hostexec-bootstrap-e2e-minion-group-qxpt-p54cx started at 2022-11-26 04:08:33 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 04:13:47.898: INFO: konnectivity-agent-wq6jb started at 2022-11-26 04:04:56 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container konnectivity-agent ready: false, restart count 4 Nov 26 04:13:47.898: INFO: test-hostpath-type-j7vw8 started at 2022-11-26 04:09:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 26 04:13:47.898: INFO: nfs-server started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container nfs-server ready: false, restart count 3 Nov 26 04:13:47.898: INFO: forbid-27823928-pp6dq started at 2022-11-26 04:08:33 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container c ready: true, restart count 1 Nov 26 04:13:47.898: INFO: csi-mockplugin-0 started at 2022-11-26 04:06:46 +0000 UTC (0+4 container statuses recorded) Nov 26 04:13:47.898: INFO: Container busybox 
ready: false, restart count 3 Nov 26 04:13:47.898: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 04:13:47.898: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 04:13:47.898: INFO: Container mock ready: false, restart count 5 Nov 26 04:13:47.898: INFO: netserver-1 started at 2022-11-26 04:06:53 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container webserver ready: false, restart count 4 Nov 26 04:13:47.898: INFO: pod-secrets-adfbeb31-644e-4291-99dd-25f663a7d510 started at 2022-11-26 04:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 04:13:47.898: INFO: external-local-update-952bk started at 2022-11-26 04:09:10 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container netexec ready: true, restart count 1 Nov 26 04:13:47.898: INFO: test-hostpath-type-476bt started at 2022-11-26 04:08:59 +0000 UTC (0+1 container statuses recorded) Nov 26 04:13:47.898: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 04:13:50.572: INFO: Logging node info for node bootstrap-e2e-minion-group-vw8q Nov 26 04:13:50.617: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vw8q 92c43d97-c454-4272-a2ff-80b54b16ce44 2991 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vw8q kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:09:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 04:09:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-vw8q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 
UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:09:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.38.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ea42ad218a1f1be71bdfbef5adca0e4,SystemUUID:3ea42ad2-18a1-f1be-71bd-fbef5adca0e4,BootID:f248d9e8-3716-4260-b005-1cc522930f08,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:13:50.617: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-vw8q Nov 26 04:13:50.667: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-vw8q Nov 26 04:13:50.712: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-vw8q: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-6726" for this suite. 11/26/22 04:13:50.712
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520 k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab
from junit_01.xml
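For context on what this test exercises: the inclusterclient pod (an agnhost mode, per the main.go log lines in the output below) builds its client configuration from the projected service-account token mounted into the pod and polls /healthz, relying on client-go to re-read the rotated token from disk between calls. A minimal sketch of that loop, written here purely for illustration and not taken from the agnhost source, might look like:

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig reads the CA bundle and the projected token from
	// /var/run/secrets/kubernetes.io/serviceaccount; client-go refreshes the
	// token from that file, which is what makes rotation transparent.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}
	for {
		// /healthz only checks that the request authenticates; a
		// "connection refused" here (as in the pod log below) means the
		// apiserver itself was unreachable, not that the token was stale.
		err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Error()
		if err != nil {
			log.Printf("status=failed: error checking /healthz: %v", err)
		} else {
			log.Printf("status=ok")
		}
		time.Sleep(30 * time.Second)
	}
}

Read against that loop, the E1126 lines in the pod log show a "dial tcp 10.0.0.1:443: connect: connection refused" failure, i.e. the test tripped on apiserver availability rather than on token rotation itself.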
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:06:42.398 Nov 26 04:06:42.398: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/26/22 04:06:42.4 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:06:42.548 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:06:42.629 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 26 04:06:43.097: INFO: created pod Nov 26 04:06:43.097: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 26 04:06:43.097: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-6320" to be "running and ready" Nov 26 04:06:43.252: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 155.066953ms Nov 26 04:06:43.252: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-vw8q' to be 'Running' but was 'Pending' Nov 26 04:06:45.297: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200220443s Nov 26 04:06:45.297: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-vw8q' to be 'Running' but was 'Pending' Nov 26 04:06:47.296: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198692485s Nov 26 04:06:47.296: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-vw8q' to be 'Running' but was 'Pending' Nov 26 04:06:49.309: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212150394s Nov 26 04:06:49.309: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-vw8q' to be 'Running' but was 'Pending' Nov 26 04:06:51.295: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 8.197504196s Nov 26 04:06:51.295: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 26 04:06:51.295: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [inclusterclient] Nov 26 04:06:51.295: INFO: pod is ready Nov 26 04:07:51.296: INFO: polling logs Nov 26 04:07:51.440: FAIL: Unexpected error: inclusterclient reported an error: saw status=failed I1126 04:06:48.444155 1 main.go:61] started I1126 04:07:18.445444 1 main.go:79] calling /healthz I1126 04:07:18.445791 1 main.go:96] authz_header=6c2fe0Z4VzbPXhceM5nA7BwHj1Zv6dvB6nDadJY-beA E1126 04:07:18.446725 1 main.go:82] status=failed E1126 04:07:18.446747 1 main.go:83] error checking /healthz: Get "https://10.0.0.1:443/healthz": dial tcp 10.0.0.1:443: connect: connection refused I1126 04:07:48.445417 1 main.go:79] calling /healthz I1126 04:07:48.445638 1 main.go:96] authz_header=6c2fe0Z4VzbPXhceM5nA7BwHj1Zv6dvB6nDadJY-beA Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 26 04:07:51.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:07:51.522 STEP: Collecting events from namespace "svcaccounts-6320". 11/26/22 04:07:51.522 STEP: Found 5 events. 11/26/22 04:07:51.564 Nov 26 04:07:51.564: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for inclusterclient: { } Scheduled: Successfully assigned svcaccounts-6320/inclusterclient to bootstrap-e2e-minion-group-vw8q Nov 26 04:07:51.564: INFO: At 2022-11-26 04:06:44 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-vw8q} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-e2e" : failed to sync configmap cache: timed out waiting for the condition Nov 26 04:07:51.564: INFO: At 2022-11-26 04:06:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-vw8q} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 04:07:51.564: INFO: At 2022-11-26 04:06:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-vw8q} Created: Created container inclusterclient Nov 26 04:07:51.564: INFO: At 2022-11-26 04:06:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-vw8q} Started: Started container inclusterclient Nov 26 04:07:51.607: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 04:07:51.607: INFO: inclusterclient bootstrap-e2e-minion-group-vw8q Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:43 +0000 UTC }] Nov 26 04:07:51.607: INFO: Nov 26 04:07:51.700: INFO: Logging node info for node bootstrap-e2e-master Nov 26 04:07:51.742: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 760c1d8a-0b99-4fc2-b794-2f7d92be53de 619 0 2022-11-26 04:04:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 
topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:05:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:05:00 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:05:00 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:05:00 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:05:00 +0000 UTC,LastTransitionTime:2022-11-26 04:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.67.129,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db1e6ecd86ec3076124edcf143d32b8,SystemUUID:1db1e6ec-d86e-c307-6124-edcf143d32b8,BootID:d37249f8-fcc3-445d-9b62-007b3da4145b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:07:51.743: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 04:07:51.787: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 04:07:51.850: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container kube-apiserver ready: true, restart count 1 Nov 26 04:07:51.850: INFO: l7-lb-controller-bootstrap-e2e-master started at 
2022-11-26 04:04:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container l7-lb-controller ready: false, restart count 3 Nov 26 04:07:51.850: INFO: metadata-proxy-v0.1-jjfnm started at 2022-11-26 04:04:45 +0000 UTC (0+2 container statuses recorded) Nov 26 04:07:51.850: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:07:51.850: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:07:51.850: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container etcd-container ready: true, restart count 1 Nov 26 04:07:51.850: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container kube-controller-manager ready: false, restart count 2 Nov 26 04:07:51.850: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container kube-scheduler ready: true, restart count 2 Nov 26 04:07:51.850: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container etcd-container ready: true, restart count 0 Nov 26 04:07:51.850: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 04:04:13 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 26 04:07:51.850: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 04:03:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:51.850: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 26 04:07:52.061: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 04:07:52.061: INFO: Logging node info for node bootstrap-e2e-minion-group-hf2n Nov 26 04:07:52.103: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hf2n b4560199-9a8d-465c-8e47-040369336248 1660 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hf2n kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-hf2n topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-6912":"bootstrap-e2e-minion-group-hf2n","csi-mock-csi-mock-volumes-2270":"csi-mock-csi-mock-volumes-2270"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:04:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:04:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:07:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-hf2n,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:40 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:40 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:40 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:07:40 +0000 UTC,LastTransitionTime:2022-11-26 04:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.126.198,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:43494faaeaac761c8a8b57a289b127cf,SystemUUID:43494faa-eaac-761c-8a8b-57a289b127cf,BootID:eee28487-7bcb-4974-ba88-3d521fe377c8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:07:52.104: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hf2n Nov 26 04:07:52.147: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hf2n Nov 26 04:07:52.224: INFO: kube-proxy-bootstrap-e2e-minion-group-hf2n started at 2022-11-26 04:04:36 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container kube-proxy ready: true, restart count 2 Nov 26 04:07:52.224: INFO: konnectivity-agent-qfdfv started at 2022-11-26 04:04:48 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container konnectivity-agent ready: false, restart count 1 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-bg8vb started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-fxzn4 started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: coredns-6d97d5ddb-5dpbv started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container coredns ready: true, restart count 1 Nov 26 04:07:52.224: INFO: volume-snapshot-controller-0 started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container volume-snapshot-controller ready: true, restart count 2 Nov 26 04:07:52.224: INFO: httpd 
started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container httpd ready: true, restart count 1 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-jzx7k started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: metadata-proxy-v0.1-ggcqp started at 2022-11-26 04:04:37 +0000 UTC (0+2 container statuses recorded) Nov 26 04:07:52.224: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:07:52.224: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-h96tt started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: affinity-lb-44vkz started at 2022-11-26 04:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container affinity-lb ready: true, restart count 0 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-2fp8p started at 2022-11-26 04:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: hostexec-bootstrap-e2e-minion-group-hf2n-d5w9g started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.224: INFO: kube-dns-autoscaler-5f6455f985-ddfgx started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container autoscaler ready: true, restart count 1 Nov 26 04:07:52.224: INFO: l7-default-backend-8549d69d99-cf5wx started at 2022-11-26 04:04:47 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 04:07:52.224: INFO: csi-mockplugin-0 started at 2022-11-26 04:06:46 +0000 UTC (0+4 container statuses recorded) Nov 26 04:07:52.224: INFO: Container busybox ready: true, restart count 0 Nov 26 04:07:52.224: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 04:07:52.224: INFO: Container driver-registrar ready: true, restart count 0 Nov 26 04:07:52.224: INFO: Container mock ready: true, restart count 0 Nov 26 04:07:52.224: INFO: netserver-0 started at 2022-11-26 04:06:53 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.224: INFO: Container webserver ready: true, restart count 0 Nov 26 04:07:52.224: INFO: csi-hostpathplugin-0 started at 2022-11-26 04:06:56 +0000 UTC (0+7 container statuses recorded) Nov 26 04:07:52.224: INFO: Container csi-attacher ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container csi-provisioner ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container csi-resizer ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container csi-snapshotter ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container hostpath ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container liveness-probe ready: false, restart count 1 Nov 26 04:07:52.224: INFO: Container node-driver-registrar ready: false, restart count 1 Nov 26 04:07:52.433: INFO: Latency metrics for node bootstrap-e2e-minion-group-hf2n Nov 26 04:07:52.433: INFO: Logging node info for node bootstrap-e2e-minion-group-qxpt Nov 26 04:07:52.475: INFO: Node Info: 
&Node{ObjectMeta:{bootstrap-e2e-minion-group-qxpt 3e62ac5b-dc9d-49e8-94e3-a204fdd36aeb 1652 0 2022-11-26 04:04:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qxpt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-911":"csi-mock-csi-mock-volumes-911"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:04:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:07:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-qxpt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:49 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:38 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:38 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:38 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:07:38 +0000 UTC,LastTransitionTime:2022-11-26 04:04:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.120.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:099dc3b50152230c991041456d403315,SystemUUID:099dc3b5-0152-230c-9910-41456d403315,BootID:079e8ec2-fde9-4e37-907f-be0a19459444,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:07:52.475: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qxpt Nov 26 04:07:52.519: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qxpt Nov 26 04:07:52.586: INFO: konnectivity-agent-wq6jb started at 2022-11-26 04:04:56 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container konnectivity-agent ready: true, restart count 2 Nov 26 04:07:52.586: INFO: hostexec-bootstrap-e2e-minion-group-qxpt-jnr6r started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.586: INFO: nfs-server started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container nfs-server ready: true, restart count 0 Nov 26 04:07:52.586: INFO: csi-mockplugin-0 started at 2022-11-26 04:06:46 +0000 UTC (0+4 container statuses recorded) Nov 26 04:07:52.586: INFO: Container busybox ready: true, restart count 0 Nov 26 04:07:52.586: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 04:07:52.586: INFO: Container driver-registrar ready: true, restart count 0 Nov 26 04:07:52.586: INFO: Container mock ready: true, restart count 0 Nov 26 04:07:52.586: INFO: netserver-1 started at 2022-11-26 04:06:53 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container webserver ready: false, restart count 2 Nov 26 04:07:52.586: INFO: metadata-proxy-v0.1-fbqgp started at 2022-11-26 04:04:46 +0000 UTC (0+2 container statuses recorded) Nov 26 04:07:52.586: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:07:52.586: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:07:52.586: INFO: test-hostpath-type-75qnt started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 04:07:52.586: INFO: test-hostpath-type-mpp9x started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 04:07:52.586: INFO: affinity-lb-hj9hs started at 2022-11-26 04:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container affinity-lb ready: true, restart count 0 Nov 26 04:07:52.586: INFO: kube-proxy-bootstrap-e2e-minion-group-qxpt started at 2022-11-26 04:04:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:52.586: INFO: Container kube-proxy ready: true, restart count 2 Nov 26 04:07:52.586: INFO: metrics-server-v0.5.2-867b8754b9-2hbm5 started at 2022-11-26 04:05:12 +0000 UTC (0+2 container statuses recorded) Nov 26 04:07:52.586: INFO: Container metrics-server ready: true, restart count 1 Nov 26 04:07:52.586: INFO: Container metrics-server-nanny ready: true, restart count 1 Nov 26 04:07:52.586: INFO: hostexec-bootstrap-e2e-minion-group-qxpt-hmr2v started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 
04:07:52.586: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:52.868: INFO: Latency metrics for node bootstrap-e2e-minion-group-qxpt Nov 26 04:07:52.869: INFO: Logging node info for node bootstrap-e2e-minion-group-vw8q Nov 26 04:07:52.915: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vw8q 92c43d97-c454-4272-a2ff-80b54b16ce44 1656 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vw8q kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:04:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:07:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-vw8q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:04:41 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:39 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:39 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:07:39 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:07:39 +0000 UTC,LastTransitionTime:2022-11-26 04:04:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.38.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ea42ad218a1f1be71bdfbef5adca0e4,SystemUUID:3ea42ad2-18a1-f1be-71bd-fbef5adca0e4,BootID:f248d9e8-3716-4260-b005-1cc522930f08,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:07:52.915: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-vw8q Nov 26 04:07:52.960: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-vw8q Nov 26 04:07:53.022: INFO: hostexec-bootstrap-e2e-minion-group-vw8q-92bnq started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 04:07:53.022: INFO: affinity-lb-bkc2f started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container affinity-lb ready: true, restart count 0 Nov 26 04:07:53.022: INFO: external-local-nodeport-jxqgw started at 2022-11-26 04:06:44 +0000 
UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container netexec ready: true, restart count 0 Nov 26 04:07:53.022: INFO: kube-proxy-bootstrap-e2e-minion-group-vw8q started at 2022-11-26 04:04:36 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container kube-proxy ready: true, restart count 3 Nov 26 04:07:53.022: INFO: metadata-proxy-v0.1-kfvtt started at 2022-11-26 04:04:37 +0000 UTC (0+2 container statuses recorded) Nov 26 04:07:53.022: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 04:07:53.022: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 04:07:53.022: INFO: test-hostpath-type-zpl9f started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 04:07:53.022: INFO: hostexec-bootstrap-e2e-minion-group-vw8q-wqrrv started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 04:07:53.022: INFO: hostpath-symlink-prep-provisioning-8968 started at 2022-11-26 04:06:56 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container init-volume-provisioning-8968 ready: false, restart count 0 Nov 26 04:07:53.022: INFO: konnectivity-agent-fgvpk started at 2022-11-26 04:04:48 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 26 04:07:53.022: INFO: hostexec-bootstrap-e2e-minion-group-vw8q-pvscf started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 04:07:53.022: INFO: netserver-2 started at 2022-11-26 04:06:53 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container webserver ready: false, restart count 2 Nov 26 04:07:53.022: INFO: coredns-6d97d5ddb-fl4w2 started at 2022-11-26 04:04:55 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container coredns ready: false, restart count 0 Nov 26 04:07:53.022: INFO: inclusterclient started at 2022-11-26 04:06:43 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container inclusterclient ready: true, restart count 0 Nov 26 04:07:53.022: INFO: test-hostpath-type-j2js7 started at 2022-11-26 04:06:45 +0000 UTC (0+1 container statuses recorded) Nov 26 04:07:53.022: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 04:07:53.416: INFO: Latency metrics for node bootstrap-e2e-minion-group-vw8q [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-6320" for this suite. 11/26/22 04:07:53.416
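The node-condition dump above is collected straight from the apiserver by the e2e debug helpers. For readers unfamiliar with the format, here is a minimal client-go sketch of the equivalent lookup; this is an illustrative reconstruction, not the framework's actual dump code, and the function name dumpNodeConditions plus the kubeconfig path are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dumpNodeConditions fetches one Node and prints its status conditions,
// i.e. the NodeCondition{Type,Status,Reason,Message} entries shown above.
func dumpNodeConditions(cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
	return nil
}

func main() {
	// Assumption: same kubeconfig path the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	if err := dumpNodeConditions(kubernetes.NewForConfigOrDie(cfg), "bootstrap-e2e-minion-group-vw8q"); err != nil {
		panic(err)
	}
}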
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 26 04:07:01.519: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3245 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://35.230.67.129/api/v1/namespaces/kubectl-3245/pods/httpd": dial tcp 35.230.67.129:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 26 04:07:01.599: failed to list events in namespace "kubectl-3245": Get "https://35.230.67.129/api/v1/namespaces/kubectl-3245/events": dial tcp 35.230.67.129:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 04:07:01.639: Couldn't delete ns: "kubectl-3245": Delete "https://35.230.67.129/api/v1/namespaces/kubectl-3245": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/kubectl-3245", Err:(*net.OpError)(0xc0000fafa0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:06:42.042 Nov 26 04:06:42.042: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 04:06:42.044 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:06:42.251 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:06:42.332 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 04:06:42.414 Nov 26 04:06:42.414: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3245 create -f -' Nov 26 04:06:45.310: INFO: stderr: "" Nov 26 04:06:45.310: INFO: stdout: "pod/httpd created\n" Nov 26 04:06:45.310: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 04:06:45.310: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3245" to be "running and ready" Nov 26 04:06:45.352: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 41.584424ms Nov 26 04:06:45.352: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' to be 'Running' but was 'Pending' Nov 26 04:06:47.393: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083098057s Nov 26 04:06:47.393: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' to be 'Running' but was 'Pending' Nov 26 04:06:49.421: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111281385s Nov 26 04:06:49.421: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' to be 'Running' but was 'Pending' Nov 26 04:06:51.394: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083582674s Nov 26 04:06:51.394: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' to be 'Running' but was 'Pending' Nov 26 04:06:53.397: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087406315s Nov 26 04:06:53.397: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' to be 'Running' but was 'Pending' Nov 26 04:06:55.393: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.083255047s Nov 26 04:06:55.393: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC }] Nov 26 04:06:57.394: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.08401768s Nov 26 04:06:57.394: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC }] Nov 26 04:06:59.394: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.083829933s Nov 26 04:06:59.394: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-hf2n' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:06:45 +0000 UTC }] Nov 26 04:07:01.392: INFO: Encountered non-retryable error while getting pod kubectl-3245/httpd: Get "https://35.230.67.129/api/v1/namespaces/kubectl-3245/pods/httpd": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:07:01.392: INFO: Pod httpd failed to be running and ready. Nov 26 04:07:01.392: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Nov 26 04:07:01.392: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 04:07:01.393 Nov 26 04:07:01.393: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3245 delete --grace-period=0 --force -f -' Nov 26 04:07:01.519: INFO: rc: 1 Nov 26 04:07:01.519: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc000be72e0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3245 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://35.230.67.129/api/v1/namespaces/kubectl-3245/pods/httpd\": dial tcp 35.230.67.129:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 04:07:01.519: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3245 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
error: error when deleting "STDIN": Delete "https://35.230.67.129/api/v1/namespaces/kubectl-3245/pods/httpd": dial tcp 35.230.67.129:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000becf20?, 0x0?}, {0xc0015229b0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc0015229b0, 0xc}, {0xc001700000, 0x145}, {0xc000ce1ec0?, 0x8?, 0x7fca53236a68?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc001700000, 0x145}, {0xc0015229b0, 0xc}, {0xc000be6f10, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 04:07:01.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:07:01.559 STEP: Collecting events from namespace "kubectl-3245". 11/26/22 04:07:01.56 Nov 26 04:07:01.599: INFO: Unexpected error: failed to list events in namespace "kubectl-3245": <*url.Error | 0xc00156d2f0>: { Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/kubectl-3245/events", Err: <*net.OpError | 0xc004dfdea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0018130e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011ee540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 04:07:01.599: FAIL: failed to list events in namespace "kubectl-3245": Get "https://35.230.67.129/api/v1/namespaces/kubectl-3245/events": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0005d85c0, {0xc0015229b0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000339380}, {0xc0015229b0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0005d8650?, {0xc0015229b0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f402d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00116ce70?, 0xc001068fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00040b748?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00116ce70?, 0x29449fc?}, {0xae73300?, 0xc001068f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-3245" for this suite. 
11/26/22 04:07:01.6 Nov 26 04:07:01.639: FAIL: Couldn't delete ns: "kubectl-3245": Delete "https://35.230.67.129/api/v1/namespaces/kubectl-3245": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/kubectl-3245", Err:(*net.OpError)(0xc0000fafa0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f402d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00116cd60?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x1?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00116cd60?, 0x0?}, {0xae73300?, 0x0?, 0xc0020ce9ee?}) /usr/local/go/src/reflect/value.go:368 +0xbc
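The long run of "Error evaluating pod condition running and ready" lines above is a poll loop: the framework re-fetches the pod until it is both Running and Ready, and aborts on a non-retryable error such as the connection refused seen here. A minimal sketch of that pattern, assuming client-go and the apimachinery wait package; waitRunningAndReady and the 2s/5m values are illustrative, not the framework's exact helper.

package e2esketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitRunningAndReady polls a pod until Phase=Running and the Ready
// condition is True, mirroring the retries logged above.
func waitRunningAndReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the wait; this is the
			// "Encountered non-retryable error" path in the log.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil // still Pending: retry
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // Running but ContainersNotReady: retry
	})
}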
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008ee2d0) test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:17:06.744 Nov 26 04:17:06.745: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 04:17:06.749 Nov 26 04:17:06.790: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:08.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:10.843: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:12.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:14.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:16.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:18.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:20.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:22.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:24.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:26.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:28.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:30.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:32.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:34.829: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:36.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:36.869: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:36.869: INFO: Unexpected error: <*errors.errorString | 0xc000295d80>: { s: "timed out waiting for the condition", } Nov 26 04:17:36.869: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008ee2d0) test/e2e/framework/framework.go:241 
+0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 04:17:36.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:17:36.909 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
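This namespace-creation failure (and the identical one below) shares one shape: BeforeEach retries the POST to /api/v1/namespaces until its budget runs out, and the generic "timed out waiting for the condition" text is the Error() of the apimachinery wait package's timeout. A minimal sketch of that retry, assuming client-go; createTestNamespace, the GenerateName shape, and the 2s/30s budget are assumptions, not the framework's exact code.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace keeps retrying namespace creation while the apiserver
// is unreachable. If it never succeeds, wait.PollImmediate returns
// wait.ErrWaitTimeout, whose message is exactly "timed out waiting for the
// condition" as logged above.
func createTestNamespace(cs kubernetes.Interface, basename string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"}}
		got, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // transient: retry until the budget is spent
		}
		created = got
		return true, nil
	})
	return created, err
}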
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c862d0) test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:20:16.749 Nov 26 04:20:16.750: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 04:20:16.751 Nov 26 04:20:16.791: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:18.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:20.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:22.842: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:24.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:26.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:28.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:30.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:32.830: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:34.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:36.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:38.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:40.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:42.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:44.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:46.831: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:46.870: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:46.870: INFO: Unexpected error: <*errors.errorString | 0xc00017da30>: { s: "timed out waiting for the condition", } Nov 26 04:20:46.870: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c862d0) test/e2e/framework/framework.go:241 
+0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 04:20:46.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:20:46.911 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1513 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf There were additional failures detected after the initial failure: [FAILED] Nov 26 04:11:17.388: failed to list events in namespace "esipp-2632": Get "https://35.230.67.129/api/v1/namespaces/esipp-2632/events": dial tcp 35.230.67.129:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 04:11:17.428: Couldn't delete ns: "esipp-2632": Delete "https://35.230.67.129/api/v1/namespaces/esipp-2632": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/esipp-2632", Err:(*net.OpError)(0xc002357a40)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:08:17.348 Nov 26 04:08:17.348: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 04:08:17.35 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:08:32.839 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:08:32.919 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-2632/external-local-update with type=LoadBalancer 11/26/22 04:08:34.172 STEP: setting ExternalTrafficPolicy=Local 11/26/22 04:08:34.172 STEP: waiting for loadbalancer for service esipp-2632/external-local-update 11/26/22 04:08:34.384 Nov 26 04:08:34.384: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 04:09:10.593 Nov 26 04:09:10.767: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 04:09:10.856: INFO: Found 0/1 pods - will retry Nov 26 04:09:12.905: INFO: Found all 1 pods Nov 26 04:09:12.905: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-952bk] Nov 26 04:09:12.905: INFO: Waiting up to 2m0s for pod "external-local-update-952bk" in namespace "esipp-2632" to be "running and ready" Nov 26 04:09:12.962: INFO: Pod "external-local-update-952bk": Phase="Pending", Reason="", readiness=false. Elapsed: 57.050374ms Nov 26 04:09:12.962: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-952bk' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending' Nov 26 04:09:15.032: INFO: Pod "external-local-update-952bk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127353786s Nov 26 04:09:15.032: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-952bk' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending' Nov 26 04:09:17.005: INFO: Pod "external-local-update-952bk": Phase="Running", Reason="", readiness=true. Elapsed: 4.099647112s Nov 26 04:09:17.005: INFO: Pod "external-local-update-952bk" satisfied condition "running and ready" Nov 26 04:09:17.005: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-update-952bk] STEP: waiting for loadbalancer for service esipp-2632/external-local-update 11/26/22 04:09:17.005 Nov 26 04:09:17.005: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 04:09:17.046 Nov 26 04:11:17.147: INFO: Unexpected error: <*errors.errorString | 0xc0011969f0>: { s: "no subset of available IP address found for the endpoint external-local-update within timeout 2m0s", } Nov 26 04:11:17.147: FAIL: no subset of available IP address found for the endpoint external-local-update within timeout 2m0s Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf Nov 26 04:11:17.186: INFO: Unexpected error: <*errors.errorString | 0xc0012fc800>: { s: "failed to get Service \"external-local-update\": Get \"https://35.230.67.129/api/v1/namespaces/esipp-2632/services/external-local-update\": dial tcp 35.230.67.129:443: connect: connection refused", } Nov 26 04:11:17.187: FAIL: failed to get Service "external-local-update": Get "https://35.230.67.129/api/v1/namespaces/esipp-2632/services/external-local-update": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7.1() test/e2e/network/loadbalancer.go:1495 +0xae panic({0x70eb7e0, 0xc0001428c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00059a0e0, 0x62}, {0xc000aef6b8?, 0xc00059a0e0?, 0xc000aef6e0?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc0011969f0}, {0x0?, 0x7607921?, 0x15?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 04:11:17.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 04:11:17.226: INFO: Output of kubectl describe svc: Nov 26 04:11:17.226: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=esipp-2632 describe svc --namespace=esipp-2632' Nov 26 04:11:17.348: INFO: rc: 1 Nov 26 04:11:17.348: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:11:17.348 STEP: Collecting events from namespace "esipp-2632". 
11/26/22 04:11:17.348 Nov 26 04:11:17.388: INFO: Unexpected error: failed to list events in namespace "esipp-2632": <*url.Error | 0xc0025dca20>: { Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/esipp-2632/events", Err: <*net.OpError | 0xc000e63270>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00261b920>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0010ce520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 04:11:17.388: FAIL: failed to list events in namespace "esipp-2632": Get "https://35.230.67.129/api/v1/namespaces/esipp-2632/events": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0009be5c0, {0xc00181ba90, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0021f5860}, {0xc00181ba90, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0009be650?, {0xc00181ba90?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001214000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000c15c90?, 0xc001a05f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000c15c90?, 0x7fadfa0?}, {0xae73300?, 0xc001a05f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-2632" for this suite. 11/26/22 04:11:17.389 Nov 26 04:11:17.428: FAIL: Couldn't delete ns: "esipp-2632": Delete "https://35.230.67.129/api/v1/namespaces/esipp-2632": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/esipp-2632", Err:(*net.OpError)(0xc002357a40)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001214000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000c15b70?, 0x66e0100?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000c15b70?, 0xc002661f68?}, {0xae73300?, 0x801de88?, 0xc0021f5860?}) /usr/local/go/src/reflect/value.go:368 +0xbc
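Before the apiserver went down, the interesting step in this test was "turning ESIPP off", i.e. switching the Service's ExternalTrafficPolicy from Local back to Cluster and waiting for the load balancer's endpoints to settle. A minimal client-go sketch of that update (the helper name is hypothetical; the real test logic lives in test/e2e/network/loadbalancer.go):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// setTrafficPolicy uses the standard get-modify-update pattern to flip a
// Service's ExternalTrafficPolicy field.
func setTrafficPolicy(c kubernetes.Interface, ns, name string,
	p v1.ServiceExternalTrafficPolicyType) (*v1.Service, error) {
	svc, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	svc.Spec.ExternalTrafficPolicy = p
	return c.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
}

func main() {
	// Seed a fake clientset with a Service shaped like the one in the log.
	client := fake.NewSimpleClientset(&v1.Service{
		ObjectMeta: metav1.ObjectMeta{Namespace: "esipp-2632", Name: "external-local-update"},
		Spec: v1.ServiceSpec{
			Type:                  v1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
		},
	})
	// "Turning ESIPP off" = Local -> Cluster.
	svc, err := setTrafficPolicy(client, "esipp-2632", "external-local-update",
		v1.ServiceExternalTrafficPolicyTypeCluster)
	if err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Println("ExternalTrafficPolicy is now", svc.Spec.ExternalTrafficPolicy)
}
```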
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013a4690) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 from junit_01.xml
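The [PANICKED] AfterEach above is a secondary failure: BeforeEach never finished (namespace creation timed out), so state the cleanup at loadbalancer.go:1262 expected to exist was still nil when dereferenced. A tiny illustrative guard against that pattern (hypothetical types, not the actual test code):

```go
package main

import "fmt"

// testState stands in for whatever the cleanup dereferences; it stays nil
// when setup bails out before initializing it.
type testState struct{ serviceName string }

func afterEach(s *testState) {
	if s == nil {
		// Without this guard, touching s.serviceName panics with
		// "invalid memory address or nil pointer dereference".
		fmt.Println("skipping cleanup: setup never completed")
		return
	}
	fmt.Println("cleaning up service", s.serviceName)
}

func main() {
	afterEach(nil)                                               // the failure mode in the log
	afterEach(&testState{serviceName: "external-local-update"}) // normal path
}
```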
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:20:07.568 Nov 26 04:20:07.568: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 04:20:07.57 Nov 26 04:20:07.609: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused (the same create error repeats on each ~2s retry at 04:20:09.650, 04:20:11.649, 04:20:13.650, 04:20:15.650, 04:20:17.649, 04:20:19.649, and 04:20:21.650; interleaved with the retries are several hundred verbatim-identical lines of ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused, emitted by a poll loop still watching the unrelated namespace provisioning-692-2640; only one instance is kept here) ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:23.649: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:25.650: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:27.649: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:29.650: INFO: Unexpected error while creating namespace: Post 
"https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:31.650: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:33.649: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:35.649: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:37.650: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:37.689: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:37.689: INFO: Unexpected error: <*errors.errorString | 0xc0001fda30>: { s: "timed out waiting for the condition", } Nov 26 04:20:37.689: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013a4690) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 04:20:37.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:20:37.743 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000fcc000) test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:18:05.506
Nov 26 04:18:05.506: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename esipp 11/26/22 04:18:05.508
Nov 26 04:18:05.547: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused [retried roughly every 2s with the same error through 04:18:35.627]
Nov 26 04:18:35.627: INFO: Unexpected error: <*errors.errorString | 0xc000285cc0>: { s: "timed out waiting for the condition", }
Nov 26 04:18:35.627: FAIL: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000fcc000) test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32
Nov 26 04:18:35.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:18:35.667
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
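The secondary [PANICKED] at loadbalancer.go:1262 follows directly from the BeforeEach failure: the AfterEach dereferences state that setup never got to initialize. A hedged sketch of that failure mode, with invented names (testState and serviceJig are illustrative, not the actual test code):

package main

import "fmt"

// serviceJig and testState stand in for whatever the real test
// initializes in BeforeEach; they are illustrative only.
type serviceJig struct{ name string }

type testState struct {
	jig *serviceJig // stays nil when BeforeEach bails out early
}

// afterEach mimics a cleanup hook that assumes setup succeeded.
func afterEach(s *testState) {
	if s.jig == nil {
		// Without this guard, s.jig.name below is a nil-pointer
		// dereference -- the same class of panic reported as the
		// [PANICKED] secondary failure above.
		fmt.Println("setup never completed; skipping cleanup")
		return
	}
	fmt.Println("cleaning up service", s.jig.name)
}

func main() {
	// BeforeEach timed out before assigning jig, so it is nil here.
	afterEach(&testState{})
}

Guarding the cleanup hook this way keeps an apiserver outage from stacking a nil-pointer panic on top of the real failure.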
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0042ee700, {0x75c6f7c, 0x9}, 0xc0021dffb0) test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0042ee700, 0x7f2b9cdeb570?) test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0042ee700, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000bf6000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:07:01.983: failed to list events in namespace "esipp-7609": Get "https://35.230.67.129/api/v1/namespaces/esipp-7609/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:07:02.023: Couldn't delete ns: "esipp-7609": Delete "https://35.230.67.129/api/v1/namespaces/esipp-7609": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/esipp-7609", Err:(*net.OpError)(0xc003860000)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:06:42.444
Nov 26 04:06:42.444: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename esipp 11/26/22 04:06:42.448
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:06:42.604
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:06:42.729
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250
[It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314
STEP: creating a service esipp-7609/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/26/22 04:06:43.418
STEP: creating a pod to be part of the service external-local-nodeport 11/26/22 04:06:43.829
Nov 26 04:06:43.964: INFO: Waiting up to 2m0s for 1 pods to be created
Nov 26 04:06:44.193: INFO: Found 0/1 pods - will retry
Nov 26 04:06:46.244: INFO: Found all 1 pods
Nov 26 04:06:46.244: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-jxqgw]
Nov 26 04:06:46.244: INFO: Waiting up to 2m0s for pod "external-local-nodeport-jxqgw" in namespace "esipp-7609" to be "running and ready"
Nov 26 04:06:46.284: INFO: Pod "external-local-nodeport-jxqgw": Phase="Pending", Reason="", readiness=false. Elapsed: 40.853918ms
Nov 26 04:06:46.284: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jxqgw' on 'bootstrap-e2e-minion-group-vw8q' to be 'Running' but was 'Pending'
Nov 26 04:06:48.332: INFO: Pod "external-local-nodeport-jxqgw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088807515s
Nov 26 04:06:50.335: INFO: Pod "external-local-nodeport-jxqgw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091661368s
Nov 26 04:06:52.327: INFO: Pod "external-local-nodeport-jxqgw": Phase="Running", Reason="", readiness=true. Elapsed: 6.083157881s
Nov 26 04:06:52.327: INFO: Pod "external-local-nodeport-jxqgw" satisfied condition "running and ready"
Nov 26 04:06:52.327: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodeport-jxqgw]
STEP: Performing setup for networking test in namespace esipp-7609 11/26/22 04:06:53.418
STEP: creating a selector 11/26/22 04:06:53.418
STEP: Creating the service pods in kubernetes 11/26/22 04:06:53.418
Nov 26 04:06:53.418: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 26 04:06:53.649: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-7609" to be "running and ready"
Nov 26 04:06:53.693: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.190109ms
Nov 26 04:06:55.737: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087188143s
Nov 26 04:06:57.735: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.085111032s
Nov 26 04:06:59.778: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.128050417s
Nov 26 04:07:01.732: INFO: Encountered non-retryable error while getting pod esipp-7609/netserver-0: Get "https://35.230.67.129/api/v1/namespaces/esipp-7609/pods/netserver-0": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:07:01.733: FAIL: error while waiting for pod esipp-7609/netserver-0 to be running and ready: Get "https://35.230.67.129/api/v1/namespaces/esipp-7609/pods/netserver-0": dial tcp 35.230.67.129:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0042ee700, {0x75c6f7c, 0x9}, 0xc0021dffb0) test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0042ee700, 0x7f2b9cdeb570?) test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0042ee700, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000bf6000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145
Nov 26 04:07:01.773: FAIL: Delete "https://35.230.67.129/api/v1/namespaces/esipp-7609/services/external-local-nodeport": dial tcp 35.230.67.129:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7
panic({0x70eb7e0, 0xc00033ff10}) /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc003607a00, 0xce}, {0xc003bc97c0?, 0xc003607a00?, 0xc003bc97e8?}) test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc00436eae0}, {0x0?, 0xc004236770?, 0xc0038a98e0?}) test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43
(then createNetProxyPods, setupCore, setup, NewNetworkingTestConfig and glob..func20.4, as above)
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32
Nov 26 04:07:01.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260
Nov 26 04:07:01.813: INFO: Output of kubectl describe svc:
Nov 26 04:07:01.813: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=esipp-7609 describe svc --namespace=esipp-7609'
Nov 26 04:07:01.935: INFO: rc: 1
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:07:01.936
STEP: Collecting events from namespace "esipp-7609". 11/26/22 04:07:01.936
Nov 26 04:07:01.983: FAIL: failed to list events in namespace "esipp-7609": Get "https://35.230.67.129/api/v1/namespaces/esipp-7609/events": dial tcp 35.230.67.129:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00459c5c0, {0xc004236770, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001515d40}, {0xc004236770, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00459c650?, {0xc004236770?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000bf6000) test/e2e/framework/framework.go:271 +0x179
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
STEP: Destroying namespace "esipp-7609" for this suite. 11/26/22 04:07:01.984
Nov 26 04:07:02.023: FAIL: Couldn't delete ns: "esipp-7609": Delete "https://35.230.67.129/api/v1/namespaces/esipp-7609": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/esipp-7609", Err:(*net.OpError)(0xc003860000)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000bf6000) test/e2e/framework/framework.go:383 +0x1ca
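The "running and ready" wait lines above come from a poll loop that repeatedly GETs the pod and checks its phase and Ready condition. A minimal client-go sketch of that shape, runnable against a fake clientset; it is an illustration, not the framework's actual helper, which additionally classifies errors as retryable (which is why the refused connection above ended the wait as "non-retryable"):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForPodRunningReady re-GETs the pod until it is Running with the
// Ready condition true, or the timeout elapses.
func waitForPodRunningReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the wait, the analogue of the
			// framework treating "connection refused" as non-retryable.
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %s/%s is %s, waiting\n", ns, name, pod.Status.Phase)
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// A fake clientset pre-loaded with a ready pod, so the sketch runs
	// without a cluster.
	cs := fake.NewSimpleClientset(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "netserver-0", Namespace: "esipp-7609"},
		Status: corev1.PodStatus{
			Phase:      corev1.PodRunning,
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		},
	})
	fmt.Println(waitForPodRunningReady(cs, "esipp-7609", "netserver-0", time.Minute))
}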
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013a4690) test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:17:34.838
Nov 26 04:17:34.838: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename esipp 11/26/22 04:17:34.84
ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused [this message repeats continuously between the entries below; duplicates elided]
Nov 26 04:17:34.879: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:36.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:38.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:40.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:42.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:44.920: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:17:46.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
ERROR: get pod list in provisioning-692-2640: Get
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:48.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:50.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:52.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:54.918: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:56.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:58.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:00.920: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:02.918: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get 
"https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:04.919: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:04.959: INFO: Unexpected error while creating namespace: Post 
"https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:04.959: INFO: Unexpected error: <*errors.errorString | 0xc0001fda30>: { s: "timed out waiting for the condition", } Nov 26 04:18:04.959: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013a4690) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 04:18:04.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ERROR: get pod list in provisioning-692-2640: Get "https://35.230.67.129/api/v1/namespaces/provisioning-692-2640/pods": dial tcp 35.230.67.129:443: connect: connection refused [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:18:04.999 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000bce4b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:16:36.761
Nov 26 04:16:36.761: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:16:36.762
Nov 26 04:16:36.802: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
(the same INFO line repeated every ~2s from 04:16:38.842 through 04:17:06.881 as the create was retried)
Nov 26 04:17:06.881: INFO: Unexpected error: <*errors.errorString | 0xc00017da30>: { s: "timed out waiting for the condition", }
Nov 26 04:17:06.881: FAIL: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000bce4b0)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32
Nov 26 04:17:06.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71
[DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 04:17:06.921
[DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
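Note the secondary [PANICKED] failure above: because BeforeEach timed out before the test's state was initialized, the AfterEach at loadbalancer.go:73 dereferenced a nil pointer. A minimal illustration of that failure mode and the usual nil-guard, with hypothetical type and field names rather than the test's actual ones:

package main

import "fmt"

type serviceJig struct{ name string }

type suite struct {
	jig *serviceJig // only set if BeforeEach succeeded
}

func (s *suite) afterEach() {
	if s.jig == nil {
		// BeforeEach never got far enough; nothing to clean up.
		fmt.Println("skipping cleanup: jig was never initialized")
		return
	}
	fmt.Println("cleaning up", s.jig.name)
}

func main() {
	s := &suite{} // simulates a failed BeforeEach: jig stays nil
	s.afterEach() // without the guard above, s.jig.name would panic
}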
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/service.go:604
k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc000753ad0, 0xe}, 0x753e, 0x0?)
	test/e2e/network/service.go:604 +0x17b
k8s.io/kubernetes/test/e2e/network.glob..func19.4()
	test/e2e/network/loadbalancer.go:411 +0xe09
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:21:26.007: failed to list events in namespace "loadbalancers-3091": Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:21:26.048: Couldn't delete ns: "loadbalancers-3091": Delete "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/loadbalancers-3091", Err:(*net.OpError)(0xc00423a140)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:07:02.029
Nov 26 04:07:02.030: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:07:02.032
Nov 26 04:07:02.071: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
(the same INFO line repeated every ~2s through 04:07:30.111, until the apiserver recovered)
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:08:32.768
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:08:32.849
[BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65
[It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287
Nov 26 04:08:33.663: INFO: namespace for TCP test: loadbalancers-3091
STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-3091 11/26/22 04:08:33.817
Nov 26 04:08:33.967: INFO: service port UDP: 80
STEP: creating a pod to be part of the UDP service mutability-test 11/26/22 04:08:33.967
Nov 26 04:08:34.087: INFO: Waiting up to 2m0s for 1 pods to be created
Nov 26 04:08:34.222: INFO: Found 0/1 pods - will retry
Nov 26 04:08:36.264: INFO: Found all 1 pods
Nov 26 04:08:36.264: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-wwdrh]
Nov 26 04:08:36.265: INFO: Waiting up to 2m0s for pod "mutability-test-wwdrh" in namespace "loadbalancers-3091" to be "running and ready"
Nov 26 04:08:36.305: INFO: Pod "mutability-test-wwdrh": Phase="Pending", Reason="", readiness=false. Elapsed: 40.563332ms
Nov 26 04:08:36.305: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-wwdrh' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending'
(the Pending check repeated at 04:08:38.360 and 04:08:40.361)
Nov 26 04:08:42.372: INFO: Pod "mutability-test-wwdrh": Phase="Running", Reason="", readiness=true. Elapsed: 6.107172642s
Nov 26 04:08:42.372: INFO: Pod "mutability-test-wwdrh" satisfied condition "running and ready"
Nov 26 04:08:42.372: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-wwdrh]
STEP: changing the UDP service to type=NodePort 11/26/22 04:08:42.372
Nov 26 04:08:42.530: INFO: UDP node port: 30013
STEP: hitting the UDP service's NodePort 11/26/22 04:08:42.53
Nov 26 04:08:42.530: INFO: Poking udp://34.127.126.198:30013
Nov 26 04:08:42.571: INFO: Poke("udp://34.127.126.198:30013"): read udp 10.60.145.190:49776->34.127.126.198:30013: read: connection refused
Nov 26 04:08:44.572: INFO: Poking udp://34.127.126.198:30013
Nov 26 04:08:44.614: INFO: Poke("udp://34.127.126.198:30013"): success
STEP: creating a static load balancer IP 11/26/22 04:08:44.614
Nov 26 04:08:46.607: INFO: Allocated static load balancer IP: 35.247.10.22
STEP: changing the UDP service to type=LoadBalancer 11/26/22 04:08:46.607
STEP: demoting the static IP to ephemeral 11/26/22 04:08:46.713
STEP: waiting for the UDP service to have a load balancer 11/26/22 04:08:48.281
Nov 26 04:08:48.281: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer
Nov 26 04:09:58.391: INFO: Retrying .... error trying to get Service mutability-test: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/services/mutability-test": dial tcp 35.230.67.129:443: connect: connection refused
(the same Retrying line repeated every ~2s through 04:11:36.390; the 04:11:00.661 attempt additionally reported "error from a previous attempt: read tcp 10.60.145.190:51874->35.230.67.129:443: read: connection reset by peer")
error trying to get Service mutability-test: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/services/mutability-test": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:11:38.390: INFO: Retrying .... error trying to get Service mutability-test: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/services/mutability-test": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:11:40.390: INFO: Retrying .... error trying to get Service mutability-test: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/services/mutability-test": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m31.493s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 4m45.241s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m51.494s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m5.243s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m11.497s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m25.245s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m31.499s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m0.007s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m45.247s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m51.501s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m20.008s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m5.249s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m11.503s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m40.011s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m25.252s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m31.505s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m0.013s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m45.254s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 411 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fd638, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00375d200?, 0xc004599bb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001635160?, 0x7fa7740?, 0xc0001fe680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003860aa0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003860aa0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:15:42.393: INFO: UDP load balancer: 35.247.10.22 STEP: hitting the UDP service's NodePort 11/26/22 04:15:42.393 Nov 26 04:15:42.394: INFO: Poking udp://34.127.126.198:30013 Nov 26 04:15:42.436: INFO: Poke("udp://34.127.126.198:30013"): success STEP: hitting the UDP service's LoadBalancer 11/26/22 04:15:42.436 Nov 26 04:15:42.436: INFO: Poking udp://35.247.10.22:80 Nov 26 04:15:45.436: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:32832->35.247.10.22:80: i/o timeout Nov 26 04:15:47.437: INFO: Poking udp://35.247.10.22:80 Nov 26 04:15:50.438: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:41034->35.247.10.22:80: i/o timeout Nov 26 04:15:51.437: INFO: Poking udp://35.247.10.22:80 ------------------------------ Progress Report for Ginkgo Process #14 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m51.508s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m20.016s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 11.101s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 411 [IO wait] internal/poll.runtime_pollWait(0x7f2ba0911118, 0x72) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc004324000?, 0xc00437077a?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc004324000, {0xc00437077a, 0x6, 0x6}) /usr/local/go/src/internal/poll/fd_unix.go:167 net.(*netFD).Read(0xc004324000, {0xc00437077a?, 0xc001137818?, 0x2671252?}) /usr/local/go/src/net/fd_posix.go:55 net.(*conn).Read(0xc00160a170, {0xc00437077a?, 0xae40400?, 0xae40400?}) /usr/local/go/src/net/net.go:183 > k8s.io/kubernetes/test/e2e/network.pokeUDP({0xc00152ec80, 0xc}, 0x50, {0x75ca3e8, 0xa}, 0xc001137a70) test/e2e/network/service.go:562 > k8s.io/kubernetes/test/e2e/network.testReachableUDP.func1() test/e2e/network/service.go:593 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x7fadb00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fc7b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0001fe680?, 0xc001137cb0?, 0x262a967?) 
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0016609f0?, 0x754e980?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc00152ec80, 0xc}, 0x50, 0x0?)
    test/e2e/network/service.go:603
> k8s.io/kubernetes/test/e2e/network.glob..func19.4()
    test/e2e/network/loadbalancer.go:393
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 04:15:54.437: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:34252->35.247.10.22:80: i/o timeout
Nov 26 04:15:55.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:15:55.477: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:59909->35.247.10.22:80: read: connection refused
Nov 26 04:15:57.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:15:57.476: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:57398->35.247.10.22:80: read: connection refused
Nov 26 04:15:59.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:02.437: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:36228->35.247.10.22:80: i/o timeout
Nov 26 04:16:03.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:03.477: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:53804->35.247.10.22:80: read: connection refused
Nov 26 04:16:05.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:08.438: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:49368->35.247.10.22:80: i/o timeout
Nov 26 04:16:09.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:09.478: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:37124->35.247.10.22:80: read: connection refused
Nov 26 04:16:11.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:11.477: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:50199->35.247.10.22:80: read: connection refused
Nov 26 04:16:13.437: INFO: Poking udp://35.247.10.22:80
------------------------------
Progress Report for Ginkgo Process #14
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 9m11.512s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 7m40.02s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 31.106s)
        test/e2e/network/loadbalancer.go:392

  Spec Goroutine
  goroutine 411 [IO wait]
    internal/poll.runtime_pollWait(0x7f2b9c4933a0, 0x72)
      /usr/local/go/src/runtime/netpoll.go:305
    internal/poll.(*pollDesc).wait(0xc001090080?, 0xc00389deca?, 0x0)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:84
    internal/poll.(*pollDesc).waitRead(...)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:89
    internal/poll.(*FD).Read(0xc001090080, {0xc00389deca, 0x6, 0x6})
      /usr/local/go/src/internal/poll/fd_unix.go:167
    net.(*netFD).Read(0xc001090080, {0xc00389deca?, 0xc001137818?, 0x2671252?})
      /usr/local/go/src/net/fd_posix.go:55
    net.(*conn).Read(0xc000a623b0, {0xc00389deca?, 0xae40400?, 0xae40400?})
      /usr/local/go/src/net/net.go:183
  > k8s.io/kubernetes/test/e2e/network.pokeUDP({0xc00152ec80, 0xc}, 0x50, {0x75ca3e8, 0xa}, 0xc001137a70)
      test/e2e/network/service.go:562
  > k8s.io/kubernetes/test/e2e/network.testReachableUDP.func1()
      test/e2e/network/service.go:593
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x7fadb00?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0009fc7b0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0001fe680?, 0xc001137cb0?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0016609f0?, 0x754e980?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc00152ec80, 0xc}, 0x50, 0x0?)
      test/e2e/network/service.go:603
  > k8s.io/kubernetes/test/e2e/network.glob..func19.4()
      test/e2e/network/loadbalancer.go:393
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 04:16:16.438: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:60033->35.247.10.22:80: i/o timeout
Nov 26 04:16:17.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:17.478: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:38261->35.247.10.22:80: read: connection refused
Nov 26 04:16:19.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:19.476: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:57652->35.247.10.22:80: read: connection refused
Nov 26 04:16:21.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:21.476: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:41533->35.247.10.22:80: read: connection refused
Nov 26 04:16:23.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:23.476: INFO: Poke("udp://35.247.10.22:80"): read udp 10.60.145.190:43643->35.247.10.22:80: read: connection refused
Nov 26 04:16:25.437: INFO: Poking udp://35.247.10.22:80
Nov 26 04:16:25.479: INFO: Poke("udp://35.247.10.22:80"): success
STEP: changing the UDP service's NodePort 11/26/22 04:16:25.479
Nov 26 04:16:25.652: INFO: UDP node port: 30014
STEP: hitting the UDP service's new NodePort 11/26/22 04:16:25.652
Nov 26 04:16:25.653: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:16:25.693: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:35009->34.127.126.198:30014: read: connection refused
Nov 26 04:16:27.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:16:27.733: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:36255->34.127.126.198:30014: read: connection refused
Nov 26 04:16:29.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:16:29.734: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:52433->34.127.126.198:30014: read: connection refused
Nov 26 04:16:31.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:16:31.733: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:36586->34.127.126.198:30014: read: connection refused
------------------------------
Progress Report for Ginkgo Process #14
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 9m31.514s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 8m0.022s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's new NodePort (Step Runtime: 7.891s)
        test/e2e/network/loadbalancer.go:410

  Spec Goroutine
  goroutine 411 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00073d920, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0001fe680?, 0xc001137cb0?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0014f1878?, 0x754e980?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc000753ad0, 0xe}, 0x753e, 0x0?)
      test/e2e/network/service.go:603
  > k8s.io/kubernetes/test/e2e/network.glob..func19.4()
      test/e2e/network/loadbalancer.go:411
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2623318, 0x39e3d00})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
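The [IO wait] report above is parked inside net.(*conn).Read under pokeUDP (test/e2e/network/service.go:562), and the poke lines show two distinct failure modes: "i/o timeout" and "read: connection refused". A minimal sketch of what such a UDP probe amounts to, assuming a simple send-one-datagram-and-wait-for-a-reply design; names and payload here are illustrative, not the actual test/e2e/network helper:

```go
// Sketch of a UDP reachability probe in the spirit of the pokeUDP calls
// above (hypothetical implementation, not the real e2e helper).
package main

import (
	"fmt"
	"net"
	"time"
)

// pokeUDP dials a connected UDP socket, sends one datagram, and waits
// briefly for a reply.
func pokeUDP(hostPort string, timeout time.Duration) string {
	conn, err := net.Dial("udp", hostPort)
	if err != nil {
		return fmt.Sprintf("dial: %v", err)
	}
	defer conn.Close()

	if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
		return fmt.Sprintf("deadline: %v", err)
	}
	if _, err := conn.Write([]byte("hello")); err != nil {
		return fmt.Sprintf("write: %v", err)
	}

	buf := make([]byte, 1500)
	if _, err := conn.Read(buf); err != nil {
		// On a connected UDP socket, an ICMP "port unreachable" from the
		// target surfaces on the next read as "read: connection refused";
		// no answer at all surfaces as "i/o timeout" -- exactly the two
		// errors alternating in the log above.
		return fmt.Sprintf("%v", err)
	}
	return "success"
}

func main() {
	fmt.Println(pokeUDP("35.247.10.22:80", 3*time.Second))
}
```

The distinction matters when reading the log: "connection refused" means a node answered with ICMP (nothing is listening on that port), while "i/o timeout" means the datagram was silently dropped.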
Nov 26 04:16:33.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:16:33.734: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:37338->34.127.126.198:30014: read: connection refused
[... the same Poking / "read: connection refused" pair repeats every 2s through 04:19:11 ...]
Nov 26 04:19:11.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:19:11.734: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:55010->34.127.126.198:30014: read: connection refused
------------------------------
Progress Report for Ginkgo Process #14
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 9m51.516s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 8m20.024s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's new NodePort (Step Runtime: 27.893s)
        test/e2e/network/loadbalancer.go:410
[... this report, carrying the same goroutine 411 [select] stack shown above, re-fires every 20s: at Spec Runtime 10m11.518s, 10m31.527s, 10m51.529s, 11m11.531s, 11m31.534s, and 11m51.537s ...]
------------------------------
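The recurring "Automatically polling progress" blocks are Ginkgo v2 progress reports, emitted here every 20 seconds while the spec sits in the same poll. As a sketch of how such reports get enabled: Ginkgo v2 exposes this via the --poll-progress-after / --poll-progress-interval CLI flags or per-node decorators; how this particular e2e invocation wires it up is not visible in the log, so treat the decorators below as an assumption:

```go
// Sketch: enabling Ginkgo v2 progress reports on a long-running spec.
// The decorator values are assumptions chosen to match the 20s cadence
// in the log, not taken from the actual e2e suite configuration.
package network

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
)

var _ = Describe("[sig-network] LoadBalancers", func() {
	It("should be able to change the type and ports of a UDP service [Slow]",
		PollProgressAfter(60*time.Second),    // first report once the node has run this long
		PollProgressInterval(20*time.Second), // then one report every 20s
		func() {
			// ... create the service, then poll it for reachability ...
		})
})
```

Each report captures the current By-step and the spec goroutine's stack, which is why the same goroutine 411 frames repeat verbatim across reports while only the runtimes advance.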
Progress Report for Ginkgo Process #14
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 12m11.54s)
    test/e2e/network/loadbalancer.go:287
    In [It] (Node Runtime: 10m40.048s)
      test/e2e/network/loadbalancer.go:287
      At [By Step] hitting the UDP service's new NodePort (Step Runtime: 2m47.917s)
        test/e2e/network/loadbalancer.go:410
[... same goroutine 411 [select] stack; identical reports follow at Spec Runtime 12m31.542s, 12m51.545s, 13m11.55s, 13m31.554s, 13m51.562s, and 14m11.565s ...]
------------------------------
Nov 26 04:19:13.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:19:13.733: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:37726->34.127.126.198:30014: read: connection refused
[... the Poking / "read: connection refused" pair continues every 2s ...]
Nov 26 04:21:25.694: INFO: Poking udp://34.127.126.198:30014
Nov 26 04:21:25.733: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:59495->34.127.126.198:30014: read: connection refused
10.60.145.190:59495->34.127.126.198:30014: read: connection refused Nov 26 04:21:25.733: INFO: Poking udp://34.127.126.198:30014 Nov 26 04:21:25.772: INFO: Poke("udp://34.127.126.198:30014"): read udp 10.60.145.190:53664->34.127.126.198:30014: read: connection refused Nov 26 04:21:25.773: FAIL: Could not reach UDP service through 34.127.126.198:30014 after 5m0s: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc000753ad0, 0xe}, 0x753e, 0x0?) test/e2e/network/service.go:604 +0x17b k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 +0xe09 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:21:25.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 04:21:25.813: INFO: Output of kubectl describe svc: Nov 26 04:21:25.813: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-3091 describe svc --namespace=loadbalancers-3091' Nov 26 04:21:25.967: INFO: rc: 1 Nov 26 04:21:25.967: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:21:25.967 STEP: Collecting events from namespace "loadbalancers-3091". 11/26/22 04:21:25.967 Nov 26 04:21:26.007: INFO: Unexpected error: failed to list events in namespace "loadbalancers-3091": <*url.Error | 0xc001520f60>: { Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/events", Err: <*net.OpError | 0xc0013a77c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0042076e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00383f880>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 04:21:26.007: FAIL: failed to list events in namespace "loadbalancers-3091": Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091/events": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00116c5c0, {0xc0004b7cf8, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0021fa340}, {0xc0004b7cf8, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00116c650?, {0xc0004b7cf8?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000bf64b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00132e320?, 0xc001ea2f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00132e320?, 0x7fadfa0?}, {0xae73300?, 0xc001ea2f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-3091" for this suite. 
11/26/22 04:21:26.008 Nov 26 04:21:26.048: FAIL: Couldn't delete ns: "loadbalancers-3091": Delete "https://35.230.67.129/api/v1/namespaces/loadbalancers-3091": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/loadbalancers-3091", Err:(*net.OpError)(0xc00423a140)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000bf64b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00132e0b0?, 0xc004280fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00132e0b0?, 0x0?}, {0xae73300?, 0x5?, 0xc0021ec7e0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
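The failure mode above is worth reading closely: testReachableUDP (test/e2e/network/service.go:603) pokes udp://34.127.126.198:30014 every two seconds inside wait.PollImmediate, every datagram comes back with "read: connection refused" (the kernel surfacing an ICMP port-unreachable on the connected UDP socket), and once the 5m0s budget is spent the step fails with the poller's generic "timed out waiting for the condition". Below is a minimal sketch of that kind of UDP poke loop, using only the Go standard library and the node address from the log; it is an illustration of the pattern, not the e2e framework's actual helper.

package main

import (
	"fmt"
	"net"
	"time"
)

// pokeUDP sends one datagram and waits briefly for any reply. On a connected
// UDP socket, "read: connection refused" typically means the kernel received
// an ICMP port-unreachable, which is exactly what the log's errors show.
func pokeUDP(hostPort string) error {
	conn, err := net.DialTimeout("udp", hostPort, 3*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	if err := conn.SetDeadline(time.Now().Add(3 * time.Second)); err != nil {
		return err
	}
	if _, err := conn.Write([]byte("echo hello")); err != nil {
		return err
	}
	buf := make([]byte, 1024)
	_, err = conn.Read(buf) // the ICMP error surfaces here, on the read
	return err
}

func main() {
	const target = "34.127.126.198:30014" // node IP and NodePort from the log
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if err := pokeUDP(target); err != nil {
			fmt.Printf("Poke(%q): %v\n", target, err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("UDP service reachable")
		return
	}
	fmt.Println("timed out waiting for the condition")
}

Note that a failed UDP send is only reported on the next operation against the socket, which is why the log's errors appear on read rather than on the write.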
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d7e4b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:20:08.965 Nov 26 04:20:08.965: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:20:08.967 Nov 26 04:20:09.007: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:11.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:13.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:15.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:17.048: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:19.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:21.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:23.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:25.048: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:27.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:29.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:31.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:33.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:35.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:37.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:39.047: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:39.087: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:39.087: INFO: Unexpected error: <*errors.errorString | 0xc000115d30>: { s: "timed out waiting for the condition", } Nov 26 04:20:39.087: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d7e4b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:20:39.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:20:39.127 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
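This spec never got past setup: the framework's BeforeEach (test/e2e/framework/framework.go:241) retries the namespace POST roughly every two seconds for 30 seconds, swallows each "connection refused", and finally fails with the poller's generic "timed out waiting for the condition". A hedged sketch of that retry shape follows, assuming a standard client-go clientset; createNamespaceWithRetry is an illustrative name, not the framework's function.

package nsretry

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createNamespaceWithRetry retries the Create every 2s until the timeout;
// each failed attempt is logged and swallowed, matching the repeated
// "Unexpected error while creating namespace" INFO lines in the log above.
func createNamespaceWithRetry(cs kubernetes.Interface, baseName string, timeout time.Duration) (*v1.Namespace, error) {
	var got *v1.Namespace
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // swallow the error so polling continues
		}
		got = ns
		return true, nil
	})
	// On expiry, err is the generic "timed out waiting for the condition".
	return got, err
}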
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c424b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:16:36.367 Nov 26 04:16:36.367: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:16:36.369 Nov 26 04:16:36.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:38.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:40.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:42.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:44.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:46.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:48.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:50.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:52.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:54.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:56.450: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:58.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:00.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:02.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:04.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:06.449: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:06.488: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:06.488: INFO: Unexpected error: <*errors.errorString | 0xc0001fda30>: { s: "timed out waiting for the condition", } Nov 26 04:17:06.488: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c424b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:17:06.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:17:06.528 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000df84b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:19:46.327 Nov 26 04:19:46.327: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:19:46.329 Nov 26 04:19:46.368: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:48.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:50.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:52.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:54.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:58.273: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": read tcp 10.60.145.190:58220->35.230.67.129:443: read: connection reset by peer Nov 26 04:19:58.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:00.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:02.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:04.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:06.412: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:08.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:10.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:12.412: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:14.409: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:16.408: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:16.448: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:16.448: INFO: Unexpected error: <*errors.errorString | 0xc00017da30>: { s: "timed out waiting for the condition", } Nov 26 04:20:16.448: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000df84b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:20:16.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:20:16.488 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
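All three LoadBalancers specs above also report a secondary [PANICKED] failure: because BeforeEach timed out before any client state was assigned, the suite's AfterEach at test/e2e/network/loadbalancer.go:73 dereferences a nil pointer. A hedged sketch of the defensive pattern that avoids turning one apiserver outage into a second failure; the names here are illustrative, not the suite's actual code.

package lbsuite

import (
	"github.com/onsi/ginkgo/v2"
	"k8s.io/client-go/kubernetes"
)

// cs is assigned only if BeforeEach completes; it stays nil when setup
// times out, as in the failures above.
var cs kubernetes.Interface

var _ = ginkgo.AfterEach(func() {
	if cs == nil {
		return // setup never completed; skip "kubectl describe svc"-style dumps
	}
	// ... cleanup that is only safe to run against a live clientset ...
})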
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shandle\sload\sbalancer\scleanup\sfinalizer\sfor\sservice\s\[Slow\]$'
test/e2e/network/loadbalancer.go:834 k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:834 +0x130
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:13:53.077 Nov 26 04:13:53.077: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:13:53.079 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:14:47.111 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:14:47.198 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should handle load balancer cleanup finalizer for service [Slow] test/e2e/network/loadbalancer.go:818 STEP: Create load balancer service 11/26/22 04:14:47.948 STEP: Wait for load balancer to serve traffic 11/26/22 04:14:48.039 Nov 26 04:14:48.083: INFO: Waiting up to 15m0s for service "lb-finalizer" to have a LoadBalancer Nov 26 04:16:34.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:36.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:38.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:40.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:42.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:44.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:46.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:48.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:50.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:52.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:54.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:56.169: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:58.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:00.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:02.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:04.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:06.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:08.170: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:10.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:12.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:14.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:16.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:18.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:20.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:22.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:24.171: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:26.169: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:28.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:30.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:32.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:34.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:36.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:38.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:40.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:42.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:44.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:46.170: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:48.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:50.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:52.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:54.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:56.168: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:58.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:00.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:02.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:04.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:06.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:08.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:10.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:12.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:14.175: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:16.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:18.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:20.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:22.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:24.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:26.168: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:28.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:30.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:32.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:34.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:36.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:38.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:40.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:42.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:44.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:46.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:48.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:50.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:52.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:54.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:56.168: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:58.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:00.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:02.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:04.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:06.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:08.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:10.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:12.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:14.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:16.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:18.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:20.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:22.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:24.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:26.168: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:28.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:30.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:32.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:34.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:36.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:38.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:40.167: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:42.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:44.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:46.167: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 5m54.871s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 4m59.909s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:19:48.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:50.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:52.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:54.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:59.314: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused - error from a previous attempt: read tcp 10.60.145.190:58214->35.230.67.129:443: read: connection reset by peer Nov 26 04:20:00.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:02.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:04.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:06.169: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 6m14.874s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 5m19.912s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:20:08.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:10.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:12.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:14.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:16.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:18.169: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:20.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:22.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:24.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:26.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 6m34.876s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 5m39.914s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:20:28.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:30.168: INFO: Retrying .... 
error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:32.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:34.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:36.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:38.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:40.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:42.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:44.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:46.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 6m54.878s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 6m0.008s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 5m59.916s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:20:48.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:50.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:52.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:54.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:56.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:20:58.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:21:00.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:21:02.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:21:04.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:21:06.168: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 7m14.883s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 6m20.012s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 6m19.921s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) 
Nov 26 04:21:08.168 through 04:22:52.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused (identical line every 2s)
------------------------------
Progress Reports for Ginkgo Process #21 continue every ~20s with the same Spec Goroutine stack as above; only the runtimes advance: Spec Runtime 7m34.885s → 8m55.332s, Node Runtime 6m40.015s → 8m0.461s, Step Runtime ("Wait for load balancer to serve traffic") 6m39.923s → 8m0.37s.
------------------------------
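Every one of these failures is raised at TCP dial time: "dial tcp 35.230.67.129:443: connect: connection refused" means the host answered but nothing was accepting connections on the apiserver port, which points at the control plane being down rather than a network partition. The same symptom can be checked outside the test harness with a plain dial (a sketch; the address is the one from this log):

// Sketch: reproduce the failure mode with a raw TCP dial. A
// "connect: connection refused" here means the host is reachable but no
// process is listening on the port.
package servicewait

import (
	"net"
	"time"
)

func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err // e.g. dial tcp 35.230.67.129:443: connect: connection refused
	}
	return conn.Close()
}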
Nov 26 04:22:54.169 through 04:24:48.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused (identical line every 2s)
------------------------------
Progress Reports continue on the same ~20s cadence, again with the identical stack: Spec Runtime 9m15.334s → 10m55.348s, Node Runtime 8m20.464s → 10m0.478s, Step Runtime 8m20.372s → 10m0.386s.
------------------------------
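What the jig is waiting for at test/e2e/network/loadbalancer.go:833 is for the Service to be assigned a load-balancer ingress, and that wait cannot make progress while the apiserver refuses connections. Roughly, the predicate being polled has this shape (a sketch of the condition, not the framework's exact code):

// Sketch: the kind of condition WaitForLoadBalancer polls for, namely that
// the Service status carries at least one load-balancer ingress entry.
// Illustrative only.
package servicewait

import v1 "k8s.io/api/core/v1"

func hasLoadBalancerIngress(svc *v1.Service) bool {
	return len(svc.Status.LoadBalancer.Ingress) > 0
}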
Nov 26 04:24:50.169 through 04:25:00.169: INFO: Retrying .... error trying to get Service lb-finalizer: Get "https://35.230.67.129/api/v1/namespaces/loadbalancers-7618/services/lb-finalizer": dial tcp 35.230.67.129:443: connect: connection refused (the last retry lines in the log; no further INFO lines appear after 04:25:00.169)
------------------------------
Progress Reports then continue alone every ~20s, each repeating the identical Spec Goroutine stack, still parked in wait.PollImmediate under (*TestJig).WaitForLoadBalancer: Spec Runtime 11m15.351s → 13m15.367s, Node Runtime 10m20.48s → 12m20.497s, Step Runtime 10m20.389s → 12m20.405s. The log is cut off partway through the final report's stack trace.
------------------------------
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 13m35.372s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 12m40.501s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 12m40.41s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 13m55.374s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 13m0.504s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 13m0.412s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 14m15.379s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 13m20.508s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 13m20.416s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 14m35.381s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 13m40.51s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 13m40.419s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 14m55.385s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 14m0.514s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 14m0.423s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 15m15.387s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 14m20.517s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 14m20.425s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow] (Spec Runtime: 15m35.391s) test/e2e/network/loadbalancer.go:818 In [It] (Node Runtime: 14m40.521s) test/e2e/network/loadbalancer.go:818 At [By Step] Wait for load balancer to serve traffic (Step Runtime: 14m40.429s) test/e2e/network/loadbalancer.go:832 Spec Goroutine goroutine 1425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003707968, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002c6b320?, 0xc000de5d58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0038cd7d0?, 0x7fa7740?, 0xc00020e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004af1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004af1130, 0xc002328820?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:833 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000807380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 04:29:48.213: INFO: Unexpected error: <*fmt.wrapError | 0xc001388600>: { msg: "timed out waiting for service \"lb-finalizer\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc00020da10>{ s: "timed out waiting for the condition", }, } Nov 26 04:29:48.213: FAIL: timed out waiting for service "lb-finalizer" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:834 +0x130 STEP: Check that service can be deleted with finalizer 11/26/22 04:29:48.213 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:29:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 04:29:48.298: INFO: Output of kubectl describe svc: Nov 26 04:29:48.298: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-7618 describe svc --namespace=loadbalancers-7618' Nov 26 04:29:48.630: INFO: stderr: "" Nov 26 04:29:48.630: INFO: stdout: "Name: lb-finalizer\nNamespace: loadbalancers-7618\nLabels: testid=lb-finalizer-14e6a8ee-e042-4a94-8dde-46947eaba0e4\nAnnotations: <none>\nSelector: testid=lb-finalizer-14e6a8ee-e042-4a94-8dde-46947eaba0e4\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.216.98\nIPs: 10.0.216.98\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 30744/TCP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 13m service-controller Ensuring load balancer\n" Nov 26 04:29:48.630: INFO: Name: lb-finalizer Namespace: loadbalancers-7618 Labels: testid=lb-finalizer-14e6a8ee-e042-4a94-8dde-46947eaba0e4 Annotations: <none> Selector: testid=lb-finalizer-14e6a8ee-e042-4a94-8dde-46947eaba0e4 Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.216.98 IPs: 10.0.216.98 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30744/TCP Endpoints: <none> Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 13m service-controller Ensuring load balancer [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:29:48.63 STEP: Collecting events from namespace "loadbalancers-7618". 11/26/22 04:29:48.63 STEP: Found 1 events. 
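For triage: the stalled goroutine above is blocked in wait.PollImmediate inside (*TestJig).WaitForLoadBalancer (test/e2e/framework/service/jig.go:582), i.e. the test is simply polling the Service until status.loadBalancer.ingress is populated, and the cloud provider never finished provisioning: the only event is EnsuringLoadBalancer with no corresponding EnsuredLoadBalancer, and the describe output shows Endpoints: <none>. A minimal standalone sketch of the same wait, written against client-go and reusing the namespace, service name, and kubeconfig path logged in this run; the 10s interval and 15m ceiling are illustrative assumptions, not the framework's actual parameters:

// A hypothetical reproduction of the condition this spec was stuck on: poll
// the Service until the cloud provider publishes an ingress IP or hostname
// in status.loadBalancer.ingress.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
		svc, err := cs.CoreV1().Services("loadbalancers-7618").Get(context.TODO(), "lb-finalizer", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		// Done once the service controller has published an external endpoint.
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for load balancer:", err)
	}
}

Had the ingress appeared, the spec would have moved on to "Check that service can be deleted with finalizer", which, per the spec name, exercises the service.kubernetes.io/load-balancer-cleanup finalizer that the service controller attaches to LoadBalancer-type Services.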
11/26/22 04:29:48.672 Nov 26 04:29:48.672: INFO: At 2022-11-26 04:16:30 +0000 UTC - event for lb-finalizer: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 04:29:48.713: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 04:29:48.713: INFO: Nov 26 04:29:48.761: INFO: Logging node info for node bootstrap-e2e-master Nov 26 04:29:48.802: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 760c1d8a-0b99-4fc2-b794-2f7d92be53de 6685 0 2022-11-26 04:04:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:25:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.67.129,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db1e6ecd86ec3076124edcf143d32b8,SystemUUID:1db1e6ec-d86e-c307-6124-edcf143d32b8,BootID:d37249f8-fcc3-445d-9b62-007b3da4145b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:29:48.803: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 04:29:48.857: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 04:29:48.900: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 04:29:48.900: INFO: Logging node info for node bootstrap-e2e-minion-group-hf2n Nov 26 04:29:48.942: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hf2n b4560199-9a8d-465c-8e47-040369336248 7408 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hf2n kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-hf2n topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-6693":"bootstrap-e2e-minion-group-hf2n","csi-mock-csi-mock-volumes-2270":"csi-mock-csi-mock-volumes-2270"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 04:16:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 04:25:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 04:29:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-hf2n,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:25:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.126.198,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:43494faaeaac761c8a8b57a289b127cf,SystemUUID:43494faa-eaac-761c-8a8b-57a289b127cf,BootID:eee28487-7bcb-4974-ba88-3d521fe377c8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb,DevicePath:,},},Config:nil,},} Nov 26 04:29:48.943: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hf2n Nov 26 04:29:48.987: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hf2n Nov 26 04:29:49.030: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-hf2n: error trying to reach service: No agent available Nov 26 04:29:49.030: INFO: Logging node info for node bootstrap-e2e-minion-group-qxpt Nov 26 04:29:49.074: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qxpt 3e62ac5b-dc9d-49e8-94e3-a204fdd36aeb 7312 0 2022-11-26 04:04:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qxpt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-qxpt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1335":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-provisioning-9081":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-provisioning-9821":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-volumemode-2273":"bootstrap-e2e-minion-group-qxpt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:25:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:27:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 04:28:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-qxpt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:25:06 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.120.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:099dc3b50152230c991041456d403315,SystemUUID:099dc3b5-0152-230c-9910-41456d403315,BootID:079e8ec2-fde9-4e37-907f-be0a19459444,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-1335^fbe2a0ad-6d40-11ed-9187-2eb54d9493d5 kubernetes.io/csi/csi-hostpath-provisioning-9081^0e5c5a49-6d41-11ed-897e-6ae34e55b26d kubernetes.io/csi/csi-hostpath-provisioning-9821^f600f249-6d40-11ed-8817-7af366db9d5d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9081^0e5c5a49-6d41-11ed-897e-6ae34e55b26d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9821^f600f249-6d40-11ed-8817-7af366db9d5d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1335^fbe2a0ad-6d40-11ed-9187-2eb54d9493d5,DevicePath:,},},Config:nil,},} Nov 26 04:29:49.075: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qxpt Nov 26 04:29:49.156: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qxpt Nov 26 04:29:49.201: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-qxpt: error trying to reach service: No agent available Nov 26 04:29:49.201: INFO: Logging node info for node bootstrap-e2e-minion-group-vw8q Nov 26 04:29:49.244: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vw8q 92c43d97-c454-4272-a2ff-80b54b16ce44 6710 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vw8q kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-vw8q topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7792":"bootstrap-e2e-minion-group-vw8q"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 04:15:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:25:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 04:25:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-vw8q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:25:09 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:25:05 +0000 UTC,LastTransitionTime:2022-11-26 04:04:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.38.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ea42ad218a1f1be71bdfbef5adca0e4,SystemUUID:3ea42ad2-18a1-f1be-71bd-fbef5adca0e4,BootID:f248d9e8-3716-4260-b005-1cc522930f08,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:29:49.244: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-vw8q Nov 26 04:29:49.289: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-vw8q Nov 26 04:29:49.333: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-vw8q: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-7618" for this suite. 11/26/22 04:29:49.334
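The repeated "Unable to retrieve kubelet pods ... error trying to reach service: No agent available" lines above come from the node-proxy path: the framework asks the API server to forward a request to each kubelet's /pods endpoint, and on this cluster that hop goes through the apiserver-network-proxy (konnectivity), which reports no agent available, so no per-node pod listing could be collected. A minimal sketch of that query — not the framework's exact code, and assuming client-go plus the kubeconfig path shown in the log:

```go
// kubeletpods.go — a hypothetical sketch of the query behind
// "Logging pods the kubelet thinks is on node ...": the API server
// proxies a request to the kubelet's /pods endpoint. While the
// konnectivity agent is unreachable, every attempt fails with
// "error trying to reach service: No agent available", as seen above.
package main

import (
	"context"
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// kubeletPods fetches the kubelet's view of its pods via the node
// proxy subresource: GET /api/v1/nodes/<node>:10250/proxy/pods.
func kubeletPods(ctx context.Context, c kubernetes.Interface, node string) (*v1.PodList, error) {
	data, err := c.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(fmt.Sprintf("%s:10250", node)).
		SubResource("proxy").
		Suffix("pods").
		Do(ctx).Raw()
	if err != nil {
		return nil, err
	}
	pods := &v1.PodList{}
	if err := json.Unmarshal(data, pods); err != nil {
		return nil, err
	}
	return pods, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := kubeletPods(context.Background(), cs, "bootstrap-e2e-minion-group-qxpt")
	if err != nil {
		// With no konnectivity agent available, the error surfaces here.
		fmt.Println("unable to retrieve kubelet pods:", err)
		return
	}
	fmt.Printf("kubelet reports %d pods\n", len(pods.Items))
}
```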
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
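This focused run fails inside execAffinityTestForLBServiceWithOptionalTransition (stack trace below) while standing up its backends. For orientation, here is a hypothetical reconstruction of the Service the test drives, with the field values taken from the kubectl describe output captured later in this log (name affinity-lb-esipp, port 80 to targetPort 9376, ClientIP session affinity, External Traffic Policy: Local); a sketch, not the test's actual construction code:

```go
// affinitysvc.go — a hypothetical reconstruction of the Service under
// test, based on the `kubectl describe svc` output further down in this
// log. Not the e2e framework's exact code.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func affinityLBService(namespace string) *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-lb-esipp",
			Namespace: namespace,
		},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeLoadBalancer,
			Selector:        map[string]string{"name": "affinity-lb-esipp"},
			SessionAffinity: v1.ServiceAffinityClientIP,
			// "ESIPP on": route only to node-local endpoints so the
			// client source IP is preserved; this is also what
			// allocates the health-check NodePort (31673 in the
			// describe output).
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
				Protocol:   v1.ProtocolTCP,
			}},
		},
	}
}

func main() {
	svc := affinityLBService("loadbalancers-6626")
	fmt.Printf("%s/%s: affinity=%s etp=%s\n", svc.Namespace, svc.Name,
		svc.Spec.SessionAffinity, svc.Spec.ExternalTrafficPolicy)
}
```

With ExternalTrafficPolicy: Local, kube-proxy only forwards to endpoints on the receiving node and the client source IP survives the hop, which is what makes per-client ClientIP affinity observable end to end.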
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc0024fa340}, 0xc002811400, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.8() test/e2e/network/loadbalancer.go:776 +0xf0
from junit_01.xml
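Had the replication controller come up, the test would then assert the affinity property itself: repeated requests from one client through the load balancer must keep landing on the same serve-hostname backend. A minimal standalone sketch of that check (hypothetical address and counts; the real framework drives it through agnhost and tolerates an optional affinity transition):

```go
// affinitycheck.go — a minimal sketch, not the framework's exact code,
// of the property under test: with Service.spec.sessionAffinity set to
// ClientIP, repeated requests from one client via the LB should keep
// hitting the same backend pod. The agnhost serve-hostname backend
// replies with its pod name, so affinity holds iff all replies match.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAffinity sends n requests to the LB address and reports whether
// every response (a pod hostname) was identical.
func checkAffinity(url string, n int) (bool, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	var first string
	for i := 0; i < n; i++ {
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return false, err
		}
		host := string(body)
		if i == 0 {
			first = host
		} else if host != first {
			return false, nil // affinity broken: a different pod answered
		}
		time.Sleep(200 * time.Millisecond)
	}
	return true, nil
}

func main() {
	// Hypothetical external IP; the run below never got this far because
	// one of the three affinity-lb-esipp pods failed to start.
	ok, err := checkAffinity("http://203.0.113.10:80", 15)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	fmt.Println("session affinity held:", ok)
}
```

As the log below shows, the suite aborts during setup instead: pod affinity-lb-esipp-f7xxk never becomes ready on bootstrap-e2e-minion-group-qxpt ("1 containers failed which is more than allowed 0"), so the affinity check is never reached.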
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:16:13.285 Nov 26 04:16:13.285: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:16:13.287 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:16:13.503 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:16:13.62 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:769 STEP: creating service in namespace loadbalancers-6626 11/26/22 04:16:13.809 STEP: creating service affinity-lb-esipp in namespace loadbalancers-6626 11/26/22 04:16:13.809 STEP: creating replication controller affinity-lb-esipp in namespace loadbalancers-6626 11/26/22 04:16:14.238 I1126 04:16:14.348389 10161 runners.go:193] Created replication controller with name: affinity-lb-esipp, namespace: loadbalancers-6626, replica count: 3 I1126 04:16:17.499857 10161 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 04:16:20.500028 10161 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1126 04:16:23.500684 10161 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 04:16:23.500705 10161 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-vw8q I1126 04:16:23.587583 10161 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vw8q 92c43d97-c454-4272-a2ff-80b54b16ce44 6393 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vw8q kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-vw8q topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7792":"bootstrap-e2e-minion-group-vw8q"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:15:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:16:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-vw8q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.38.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ea42ad218a1f1be71bdfbef5adca0e4,SystemUUID:3ea42ad2-18a1-f1be-71bd-fbef5adca0e4,BootID:f248d9e8-3716-4260-b005-1cc522930f08,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I1126 04:16:23.588059 10161 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-vw8q I1126 04:16:23.687293 10161 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-vw8q I1126 04:16:23.813169 10161 runners.go:193] Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-vw8q: error trying to reach service: No agent available I1126 04:16:23.888458 10161 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-6626 Nov 26 04:16:24.278: INFO: Failed to get logs of pod affinity-lb-esipp-f7xxk, container affinity-lb-esipp, err: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-f7xxk) Nov 26 04:16:24.278: INFO: Logs of loadbalancers-6626/affinity-lb-esipp-f7xxk:affinity-lb-esipp on node bootstrap-e2e-minion-group-qxpt Nov 26 04:16:24.278: INFO: : STARTLOG ENDLOG for container loadbalancers-6626:affinity-lb-esipp-f7xxk:affinity-lb-esipp Nov 26 04:16:24.278: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-6626: <*errors.errorString | 0xc0007e55a0>: { s: "1 containers failed which is more than allowed 0", } Nov 26 04:16:24.278: FAIL: failed to create replication controller with service in the namespace: loadbalancers-6626: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc0024fa340}, 0xc002811400, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) 
test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.8() test/e2e/network/loadbalancer.go:776 +0xf0 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:16:24.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 04:16:24.566: INFO: Output of kubectl describe svc: Nov 26 04:16:24.566: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.67.129 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6626 describe svc --namespace=loadbalancers-6626' Nov 26 04:16:25.066: INFO: stderr: "" Nov 26 04:16:25.066: INFO: stdout: "Name: affinity-lb-esipp\nNamespace: loadbalancers-6626\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-esipp\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.108.36\nIPs: 10.0.108.36\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 31979/TCP\nEndpoints: 10.64.1.92:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Local\nHealthCheck NodePort: 31673\nEvents: <none>\n" Nov 26 04:16:25.066: INFO: Name: affinity-lb-esipp Namespace: loadbalancers-6626 Labels: <none> Annotations: <none> Selector: name=affinity-lb-esipp Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.108.36 IPs: 10.0.108.36 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 31979/TCP Endpoints: 10.64.1.92:9376 Session Affinity: ClientIP External Traffic Policy: Local HealthCheck NodePort: 31673 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:16:25.066 STEP: Collecting events from namespace "loadbalancers-6626". 11/26/22 04:16:25.066 STEP: Found 9 events. 
11/26/22 04:16:25.159 Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-25fs2 Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-f7xxk Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-zmjrv Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp-25fs2: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6626/affinity-lb-esipp-25fs2 to bootstrap-e2e-minion-group-vw8q Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp-f7xxk: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6626/affinity-lb-esipp-f7xxk to bootstrap-e2e-minion-group-qxpt Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:14 +0000 UTC - event for affinity-lb-esipp-zmjrv: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6626/affinity-lb-esipp-zmjrv to bootstrap-e2e-minion-group-hf2n Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:16 +0000 UTC - event for affinity-lb-esipp-zmjrv: {kubelet bootstrap-e2e-minion-group-hf2n} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:16 +0000 UTC - event for affinity-lb-esipp-zmjrv: {kubelet bootstrap-e2e-minion-group-hf2n} Created: Created container affinity-lb-esipp Nov 26 04:16:25.159: INFO: At 2022-11-26 04:16:16 +0000 UTC - event for affinity-lb-esipp-zmjrv: {kubelet bootstrap-e2e-minion-group-hf2n} Started: Started container affinity-lb-esipp Nov 26 04:16:25.252: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 04:16:25.252: INFO: affinity-lb-esipp-25fs2 bootstrap-e2e-minion-group-vw8q Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:23 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:23 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC }] Nov 26 04:16:25.252: INFO: affinity-lb-esipp-f7xxk bootstrap-e2e-minion-group-qxpt Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC }] Nov 26 04:16:25.252: INFO: affinity-lb-esipp-zmjrv bootstrap-e2e-minion-group-hf2n Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:16:14 +0000 UTC }] Nov 26 04:16:25.252: INFO: Nov 26 04:16:25.405: INFO: Unable to fetch loadbalancers-6626/affinity-lb-esipp-25fs2/affinity-lb-esipp logs: an error 
on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-25fs2) Nov 26 04:16:25.520: INFO: Unable to fetch loadbalancers-6626/affinity-lb-esipp-f7xxk/affinity-lb-esipp logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-f7xxk) Nov 26 04:16:25.609: INFO: Unable to fetch loadbalancers-6626/affinity-lb-esipp-zmjrv/affinity-lb-esipp logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-zmjrv) Nov 26 04:16:25.680: INFO: Logging node info for node bootstrap-e2e-master Nov 26 04:16:25.743: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 760c1d8a-0b99-4fc2-b794-2f7d92be53de 3106 0 2022-11-26 04:04:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:11:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:11:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.67.129,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db1e6ecd86ec3076124edcf143d32b8,SystemUUID:1db1e6ec-d86e-c307-6124-edcf143d32b8,BootID:d37249f8-fcc3-445d-9b62-007b3da4145b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:16:25.743: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 04:16:25.806: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 04:16:25.886: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 04:16:25.886: INFO: Logging node info for node bootstrap-e2e-minion-group-hf2n Nov 26 04:16:25.954: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hf2n b4560199-9a8d-465c-8e47-040369336248 6290 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hf2n kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-hf2n topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1428":"bootstrap-e2e-minion-group-hf2n","csi-hostpath-provisioning-6693":"bootstrap-e2e-minion-group-hf2n","csi-mock-csi-mock-volumes-2270":"csi-mock-csi-mock-volumes-2270"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:14:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:16:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 04:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-hf2n,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 
UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.126.198,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hf2n.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:43494faaeaac761c8a8b57a289b127cf,SystemUUID:43494faa-eaac-761c-8a8b-57a289b127cf,BootID:eee28487-7bcb-4974-ba88-3d521fe377c8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-6912^0c716e2c-6d40-11ed-8f08-5eef2a28a1fb,DevicePath:,},},Config:nil,},} Nov 26 04:16:25.955: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hf2n Nov 26 04:16:26.013: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hf2n Nov 26 04:16:26.149: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-hf2n: error trying to reach service: No agent available Nov 26 04:16:26.149: INFO: Logging node info for node bootstrap-e2e-minion-group-qxpt Nov 26 04:16:26.276: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qxpt 3e62ac5b-dc9d-49e8-94e3-a204fdd36aeb 6447 0 2022-11-26 04:04:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qxpt kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-qxpt topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1335":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-multivolume-5438":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-provisioning-9081":"bootstrap-e2e-minion-group-qxpt","csi-hostpath-volumemode-2273":"bootstrap-e2e-minion-group-qxpt"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:14:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:16:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 04:16:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-qxpt,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:14:51 +0000 UTC,LastTransitionTime:2022-11-26 04:04:48 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:56 +0000 UTC,LastTransitionTime:2022-11-26 04:04:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:21 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:21 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:21 +0000 UTC,LastTransitionTime:2022-11-26 04:04:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:16:21 +0000 UTC,LastTransitionTime:2022-11-26 04:04:47 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.120.88,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qxpt.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:099dc3b50152230c991041456d403315,SystemUUID:099dc3b5-0152-230c-9910-41456d403315,BootID:079e8ec2-fde9-4e37-907f-be0a19459444,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-1335^fbe2a0ad-6d40-11ed-9187-2eb54d9493d5 kubernetes.io/csi/csi-hostpath-multivolume-5438^090c9734-6d41-11ed-917d-829d8d6e1ba7 kubernetes.io/csi/csi-hostpath-multivolume-5438^09d6ec99-6d41-11ed-917d-829d8d6e1ba7 kubernetes.io/csi/csi-hostpath-provisioning-9081^0e5c5a49-6d41-11ed-897e-6ae34e55b26d kubernetes.io/csi/csi-hostpath-provisioning-9821^f600f249-6d40-11ed-8817-7af366db9d5d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5438^090c9734-6d41-11ed-917d-829d8d6e1ba7,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volumemode-2273^0c018db8-6d41-11ed-ae59-2694f01081b6,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9081^0e5c5a49-6d41-11ed-897e-6ae34e55b26d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9821^f600f249-6d40-11ed-8817-7af366db9d5d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1335^fbe2a0ad-6d40-11ed-9187-2eb54d9493d5,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5438^09d6ec99-6d41-11ed-917d-829d8d6e1ba7,DevicePath:,},},Config:nil,},} Nov 26 04:16:26.277: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qxpt Nov 26 04:16:26.349: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qxpt Nov 26 04:16:26.455: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-qxpt: error trying to reach service: No agent available Nov 26 04:16:26.455: INFO: Logging node info for node bootstrap-e2e-minion-group-vw8q Nov 26 04:16:26.558: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vw8q 92c43d97-c454-4272-a2ff-80b54b16ce44 6393 0 2022-11-26 04:04:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b 
kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vw8q kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-vw8q topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-7792":"bootstrap-e2e-minion-group-vw8q"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 04:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 04:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 04:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 04:15:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 04:16:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gke-slow-1-4/us-west1-b/bootstrap-e2e-minion-group-vw8q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 04:14:42 +0000 UTC,LastTransitionTime:2022-11-26 04:04:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 04:04:46 +0000 UTC,LastTransitionTime:2022-11-26 04:04:46 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 04:16:10 +0000 UTC,LastTransitionTime:2022-11-26 04:04:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.38.125,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vw8q.c.k8s-jkns-gke-slow-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3ea42ad218a1f1be71bdfbef5adca0e4,SystemUUID:3ea42ad2-18a1-f1be-71bd-fbef5adca0e4,BootID:f248d9e8-3716-4260-b005-1cc522930f08,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 04:16:26.558: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-vw8q Nov 26 04:16:26.611: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-vw8q Nov 26 04:16:26.678: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-vw8q: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-6626" for this suite. 11/26/22 04:16:26.678
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sonly\sallow\saccess\sfrom\sservice\sloadbalancer\ssource\sranges\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000a6c4b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 04:18:45.409
Nov 26 04:18:45.410: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename loadbalancers 11/26/22 04:18:45.412
ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:45.451: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:47.492: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:49.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:51.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:53.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
Nov 26 04:18:55.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused
ERROR: get pod list in multivolume-9805-9024: Get
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:57.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:18:59.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:01.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:03.492: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:05.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:07.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:09.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:11.498: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:13.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:15.491: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused ERROR: get pod list in multivolume-9805-9024: Get 
"https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:15.531: INFO: Unexpected error while creating namespace: Post "https://35.230.67.129/api/v1/namespaces": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:19:15.531: INFO: Unexpected error: <*errors.errorString | 0xc000207d40>: { s: "timed out waiting for the condition", } Nov 26 04:19:15.531: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000a6c4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 04:19:15.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ERROR: get pod list in multivolume-9805-9024: Get "https://35.230.67.129/api/v1/namespaces/multivolume-9805-9024/pods": dial tcp 35.230.67.129:443: connect: connection refused [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:19:15.572 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\shttp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0032ff0a0, {0x75c6f7c, 0x9}, 0xc0034a2780)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0032ff0a0, 0x7f89546b2e18?)
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0032ff0a0, 0x3e?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000fcc5a0, {0xc00440df20, 0x1, 0x0?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func22.6.13()
	test/e2e/network/networking.go:364 +0x51
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:16:34.968: failed to list events in namespace "nettest-7002": Get "https://35.230.67.129/api/v1/namespaces/nettest-7002/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:16:35.009: Couldn't delete ns: "nettest-7002": Delete "https://35.230.67.129/api/v1/namespaces/nettest-7002": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/nettest-7002", Err:(*net.OpError)(0xc0038ada90)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:13:51.456 Nov 26 04:13:51.456: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/26/22 04:13:51.458 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:14:46.138 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:14:46.273 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should update nodePort: http [Slow] test/e2e/network/networking.go:363 STEP: Performing setup for networking test in namespace nettest-7002 11/26/22 04:14:46.364 STEP: creating a selector 11/26/22 04:14:46.364 STEP: Creating the service pods in kubernetes 11/26/22 04:14:46.364 Nov 26 04:14:46.364: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 04:14:46.792: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-7002" to be "running and ready" Nov 26 04:14:46.847: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 55.348327ms Nov 26 04:14:46.847: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 04:14:48.896: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104901636s Nov 26 04:14:48.897: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 04:14:50.973: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.181114924s Nov 26 04:14:50.973: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:14:52.903: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.11152113s Nov 26 04:14:52.903: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:14:54.914: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.122726693s Nov 26 04:14:54.914: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:14:56.945: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.153502134s Nov 26 04:14:56.945: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:14:58.904: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.11285616s Nov 26 04:14:58.904: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:00.978: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.186045635s Nov 26 04:15:00.978: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:02.961: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.169749746s Nov 26 04:15:02.961: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:04.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.144635739s Nov 26 04:15:04.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:06.894: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.10264296s Nov 26 04:15:06.894: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:08.933: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.14172885s Nov 26 04:15:08.933: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:10.933: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.14126645s Nov 26 04:15:10.933: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:12.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.096328948s Nov 26 04:15:12.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:14.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.096392189s Nov 26 04:15:14.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:16.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.096493435s Nov 26 04:15:16.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:18.890: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.098689318s Nov 26 04:15:18.890: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:20.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.115105057s Nov 26 04:15:20.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:22.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.096512879s Nov 26 04:15:22.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:24.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.096877603s Nov 26 04:15:24.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:26.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.096757552s Nov 26 04:15:26.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:28.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.126235465s Nov 26 04:15:28.918: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:30.988: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.19685749s Nov 26 04:15:30.988: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:32.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.096921105s Nov 26 04:15:32.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:34.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.096484763s Nov 26 04:15:34.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:36.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.096945323s Nov 26 04:15:36.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:38.890: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.098526892s Nov 26 04:15:38.890: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:40.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.097527418s Nov 26 04:15:40.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:42.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.097274512s Nov 26 04:15:42.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:44.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.097250061s Nov 26 04:15:44.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:46.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.096441172s Nov 26 04:15:46.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:48.888: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.096588112s Nov 26 04:15:48.888: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:50.924: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.132860147s Nov 26 04:15:50.924: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:52.901: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.108979077s Nov 26 04:15:52.901: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:54.932: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.140198812s Nov 26 04:15:54.932: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:56.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.130698692s Nov 26 04:15:56.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:15:59.019: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.227746035s Nov 26 04:15:59.019: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:00.926: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.134161125s Nov 26 04:16:00.926: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:02.902: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.110107483s Nov 26 04:16:02.902: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:04.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.11510615s Nov 26 04:16:04.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:06.904: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.112204218s Nov 26 04:16:06.904: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:08.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.115078675s Nov 26 04:16:08.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:10.901: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.109127659s Nov 26 04:16:10.901: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:12.914: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.122751849s Nov 26 04:16:12.914: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:14.895: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.103132383s Nov 26 04:16:14.895: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:16.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.119028495s Nov 26 04:16:16.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:18.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.115106837s Nov 26 04:16:18.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:20.957: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.165696076s Nov 26 04:16:20.957: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:22.912: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.120236196s Nov 26 04:16:22.912: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:24.927: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.135884917s Nov 26 04:16:24.927: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:26.921: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.129670622s Nov 26 04:16:26.921: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:28.908: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.116770011s Nov 26 04:16:28.908: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:30.931: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.139524445s Nov 26 04:16:30.931: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:32.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.117658987s Nov 26 04:16:32.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:16:34.888: INFO: Encountered non-retryable error while getting pod nettest-7002/netserver-0: Get "https://35.230.67.129/api/v1/namespaces/nettest-7002/pods/netserver-0": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:34.888: INFO: Unexpected error: <*fmt.wrapError | 0xc0036a15e0>: { msg: "error while waiting for pod nettest-7002/netserver-0 to be running and ready: Get \"https://35.230.67.129/api/v1/namespaces/nettest-7002/pods/netserver-0\": dial tcp 35.230.67.129:443: connect: connection refused", err: <*url.Error | 0xc003976b70>{ Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/nettest-7002/pods/netserver-0", Err: <*net.OpError | 0xc001daa190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003976b40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0036a15a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 26 04:16:34.888: FAIL: error while waiting for pod nettest-7002/netserver-0 to be running and ready: Get "https://35.230.67.129/api/v1/namespaces/nettest-7002/pods/netserver-0": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0032ff0a0, {0x75c6f7c, 0x9}, 0xc0034a2780) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0032ff0a0, 0x7f89546b2e18?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0032ff0a0, 0x3e?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000fcc5a0, {0xc00440df20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.13() test/e2e/network/networking.go:364 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 26 04:16:34.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:16:34.928 STEP: Collecting events from namespace "nettest-7002". 11/26/22 04:16:34.928 Nov 26 04:16:34.968: INFO: Unexpected error: failed to list events in namespace "nettest-7002": <*url.Error | 0xc003cc6840>: { Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/nettest-7002/events", Err: <*net.OpError | 0xc0038ad770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0036cfb90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00359b5c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 04:16:34.968: FAIL: failed to list events in namespace "nettest-7002": Get "https://35.230.67.129/api/v1/namespaces/nettest-7002/events": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0005ba5c0, {0xc0033aea30, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001f80b60}, {0xc0033aea30, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0005ba650?, {0xc0033aea30?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000fcc5a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc002e99a10?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002e99a10?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "nettest-7002" for this suite. 11/26/22 04:16:34.969 Nov 26 04:16:35.008: FAIL: Couldn't delete ns: "nettest-7002": Delete "https://35.230.67.129/api/v1/namespaces/nettest-7002": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/nettest-7002", Err:(*net.OpError)(0xc0038ada90)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000fcc5a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc002e99990?, 0x63696c7065520a0a?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x342d626434612d62?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002e99990?, 0xa0e7080822317602?}, {0xae73300?, 0x3a315673646c6569?, 0x66227b0fd20a0fd5?}) /usr/local/go/src/reflect/value.go:368 +0xbc
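The long run of "Running (Ready = false)" lines above is a poll loop: fetch the pod roughly every 2s, check its phase, then check the Ready condition. Once the Get itself fails with connection refused, the framework treats it as non-retryable and fails the test immediately. Below is a minimal sketch of such a loop, assuming a client-go clientset; the helper name and interval are illustrative, not the framework's exact code.

package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls until the pod is Running with the Ready
// condition True, mirroring the "Waiting up to 5m0s for pod ... to be
// 'running and ready'" output above.
func waitForPodRunningAndReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the poll, matching the
			// "Encountered non-retryable error" behavior in the log.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil // still Pending; keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady {
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}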
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\sudp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004725b20, {0x75c6f7c, 0x9}, 0xc0021002a0)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004725b20, 0x7f0e6f67f0c0?)
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004725b20, 0x3e?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000bce5a0, {0xc0000caf20, 0x1, 0x0?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func22.6.15()
	test/e2e/network/networking.go:395 +0x51
There were additional failures detected after the initial failure:
[FAILED] Nov 26 04:09:59.727: failed to list events in namespace "nettest-5221": Get "https://35.230.67.129/api/v1/namespaces/nettest-5221/events": dial tcp 35.230.67.129:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 04:09:59.768: Couldn't delete ns: "nettest-5221": Delete "https://35.230.67.129/api/v1/namespaces/nettest-5221": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/nettest-5221", Err:(*net.OpError)(0xc00232a0f0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:09:12.44 Nov 26 04:09:12.440: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/26/22 04:09:12.442 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:09:12.693 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:09:12.782 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should update nodePort: udp [Slow] test/e2e/network/networking.go:394 STEP: Performing setup for networking test in namespace nettest-5221 11/26/22 04:09:12.88 STEP: creating a selector 11/26/22 04:09:12.88 STEP: Creating the service pods in kubernetes 11/26/22 04:09:12.88 Nov 26 04:09:12.880: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 04:09:13.321: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-5221" to be "running and ready" Nov 26 04:09:13.396: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 75.346139ms Nov 26 04:09:13.396: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 04:09:15.488: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167286939s Nov 26 04:09:15.488: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 04:09:17.439: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.118237104s Nov 26 04:09:17.439: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:19.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.117261052s Nov 26 04:09:19.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:21.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.116809894s Nov 26 04:09:21.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:23.442: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.121800226s Nov 26 04:09:23.442: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:25.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.116570509s Nov 26 04:09:25.437: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:27.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.116522638s Nov 26 04:09:27.437: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:29.439: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.117862079s Nov 26 04:09:29.439: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:31.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.117691425s Nov 26 04:09:31.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:33.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.116670116s Nov 26 04:09:33.437: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:35.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.116591905s Nov 26 04:09:35.437: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:37.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.117348379s Nov 26 04:09:37.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:39.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.117772909s Nov 26 04:09:39.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:41.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.117582162s Nov 26 04:09:41.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:43.441: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.119870246s Nov 26 04:09:43.441: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:45.442: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.121555328s Nov 26 04:09:45.442: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:47.438: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.117123774s Nov 26 04:09:47.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:49.439: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.117904738s Nov 26 04:09:49.439: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:51.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.116815775s Nov 26 04:09:51.438: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:53.439: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.118782159s Nov 26 04:09:53.439: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 04:09:55.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 42.11668126s Nov 26 04:09:55.437: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 04:09:55.437: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 04:09:55.483: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "nettest-5221" to be "running and ready" Nov 26 04:09:55.525: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 41.556467ms Nov 26 04:09:55.525: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 26 04:09:55.525: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 26 04:09:55.566: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "nettest-5221" to be "running and ready" Nov 26 04:09:55.607: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 41.142101ms Nov 26 04:09:55.607: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 04:09:57.648: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.082769631s Nov 26 04:09:57.648: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 04:09:59.647: INFO: Encountered non-retryable error while getting pod nettest-5221/netserver-2: Get "https://35.230.67.129/api/v1/namespaces/nettest-5221/pods/netserver-2": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:09:59.647: INFO: Unexpected error: <*fmt.wrapError | 0xc0042ebc80>: { msg: "error while waiting for pod nettest-5221/netserver-2 to be running and ready: Get \"https://35.230.67.129/api/v1/namespaces/nettest-5221/pods/netserver-2\": dial tcp 35.230.67.129:443: connect: connection refused", err: <*url.Error | 0xc002648420>{ Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/nettest-5221/pods/netserver-2", Err: <*net.OpError | 0xc002062c80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0025da750>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0042ebc40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 26 04:09:59.647: FAIL: error while waiting for pod nettest-5221/netserver-2 to be running and ready: Get "https://35.230.67.129/api/v1/namespaces/nettest-5221/pods/netserver-2": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004725b20, {0x75c6f7c, 0x9}, 0xc0021002a0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004725b20, 0x7f0e6f67f0c0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004725b20, 0x3e?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000bce5a0, {0xc0000caf20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.15() test/e2e/network/networking.go:395 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 26 04:09:59.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 04:09:59.687 STEP: Collecting events from namespace "nettest-5221". 
11/26/22 04:09:59.687 Nov 26 04:09:59.727: INFO: Unexpected error: failed to list events in namespace "nettest-5221": <*url.Error | 0xc0025dacc0>: { Op: "Get", URL: "https://35.230.67.129/api/v1/namespaces/nettest-5221/events", Err: <*net.OpError | 0xc0016848c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0025dac90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 230, 67, 129], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0000dad00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 04:09:59.727: FAIL: failed to list events in namespace "nettest-5221": Get "https://35.230.67.129/api/v1/namespaces/nettest-5221/events": dial tcp 35.230.67.129:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0035025c0, {0xc001d856e0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001569a00}, {0xc001d856e0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003502650?, {0xc001d856e0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000bce5a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0014db4b0?, 0xc00385ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004303268?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0014db4b0?, 0x29449fc?}, {0xae73300?, 0xc00385ff80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "nettest-5221" for this suite. 11/26/22 04:09:59.728 Nov 26 04:09:59.768: FAIL: Couldn't delete ns: "nettest-5221": Delete "https://35.230.67.129/api/v1/namespaces/nettest-5221": dial tcp 35.230.67.129:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.230.67.129/api/v1/namespaces/nettest-5221", Err:(*net.OpError)(0xc00232a0f0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000bce5a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0014db3f0?, 0xc0031ccfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0014db3f0?, 0x0?}, {0xae73300?, 0x5?, 0xc003bdff38?}) /usr/local/go/src/reflect/value.go:368 +0xbc
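Each error dump in this run bottoms out in the same chain: a *url.Error wrapping a *net.OpError wrapping an *os.SyscallError whose errno is 0x6f (111, ECONNREFUSED on Linux). Since Go 1.13 that whole chain unwraps, so the refusal can be detected without matching on the error string. A small self-contained example follows; the loopback address is hypothetical, picked so that nothing is listening on it.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	// Nothing listens on this port, so the dial fails before TLS starts.
	_, err := http.Get("https://127.0.0.1:444/")
	// errors.Is walks url.Error -> net.OpError -> os.SyscallError -> Errno.
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("API server unreachable: connection refused")
	}
}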
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sGCE\s\[Slow\]\sshould\sbe\sable\sto\screate\sand\stear\sdown\sa\sstandard\-tier\sload\sbalancer\s\[Slow\]$'
test/e2e/network/network_tiers.go:149
k8s.io/kubernetes/test/e2e/network.waitAndVerifyLBWithTier(0x66f1c00?, {0x0?, 0x69b9ca0?}, 0x7f9bf30?, 0x0?)
	test/e2e/network/network_tiers.go:149 +0x4f
k8s.io/kubernetes/test/e2e/network.glob..func21.3()
	test/e2e/network/network_tiers.go:93 +0x287
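waitAndVerifyLBWithTier blocks until the Service reports a load-balancer ingress IP, which is why the log below shows "Waiting up to 15m0s for service "net-tiers-svc" to get a new ingress IP" followed by two-second retries once the API server goes down. Below is a sketch of that style of wait, assuming a client-go clientset; the names and intervals are illustrative, not the test's exact code.

package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNewIngressIP polls the Service until the cloud provider publishes
// an ingress IP different from oldIP, retrying through transient API errors.
func waitForNewIngressIP(c kubernetes.Interface, ns, name, oldIP string, timeout time.Duration) (string, error) {
	var ip string
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Keep retrying: this wait tolerates transient API-server outages.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		for _, ing := range svc.Status.LoadBalancer.Ingress {
			if ing.IP != "" && ing.IP != oldIP {
				ip = ing.IP
				return true, nil
			}
		}
		return false, nil
	})
	return ip, err
}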
[BeforeEach] [sig-network] Services GCE [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 04:13:33.904 Nov 26 04:13:33.904: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services 11/26/22 04:13:33.905 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 04:14:47.05 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 04:14:47.138 [BeforeEach] [sig-network] Services GCE [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services GCE [Slow] test/e2e/network/network_tiers.go:49 [It] should be able to create and tear down a standard-tier load balancer [Slow] test/e2e/network/network_tiers.go:66 STEP: creating a pod to be part of the service net-tiers-svc 11/26/22 04:14:47.298 Nov 26 04:14:47.352: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 04:14:47.399: INFO: Found all 1 pods Nov 26 04:14:47.399: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [net-tiers-svc-kvvpb] Nov 26 04:14:47.399: INFO: Waiting up to 2m0s for pod "net-tiers-svc-kvvpb" in namespace "services-3788" to be "running and ready" Nov 26 04:14:47.440: INFO: Pod "net-tiers-svc-kvvpb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.829027ms Nov 26 04:14:47.440: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-kvvpb' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending' Nov 26 04:14:49.482: INFO: Pod "net-tiers-svc-kvvpb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083006849s Nov 26 04:14:49.482: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-kvvpb' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending' Nov 26 04:14:51.504: INFO: Pod "net-tiers-svc-kvvpb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104531079s Nov 26 04:14:51.504: INFO: Error evaluating pod condition running and ready: want pod 'net-tiers-svc-kvvpb' on 'bootstrap-e2e-minion-group-qxpt' to be 'Running' but was 'Pending' Nov 26 04:14:53.628: INFO: Pod "net-tiers-svc-kvvpb": Phase="Running", Reason="", readiness=false. Elapsed: 6.22851881s Nov 26 04:14:53.628: INFO: Error evaluating pod condition running and ready: pod 'net-tiers-svc-kvvpb' on 'bootstrap-e2e-minion-group-qxpt' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:14:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:14:47 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:14:47 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 04:14:47 +0000 UTC }] Nov 26 04:14:55.532: INFO: Pod "net-tiers-svc-kvvpb": Phase="Running", Reason="", readiness=true. Elapsed: 8.13303061s Nov 26 04:14:55.532: INFO: Pod "net-tiers-svc-kvvpb" satisfied condition "running and ready" Nov 26 04:14:55.532: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [net-tiers-svc-kvvpb] STEP: creating a Service of type LoadBalancer using the standard network tier 11/26/22 04:14:55.532 Nov 26 04:14:55.712: INFO: Waiting up to 15m0s for service "net-tiers-svc" to get a new ingress IP Nov 26 04:16:33.876: INFO: Retrying .... 
error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:35.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:37.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:39.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:41.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:43.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:45.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:47.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:49.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:51.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:53.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:55.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:57.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:16:59.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:01.815: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/services-3788/services/net-tiers-svc": dial tcp 35.230.67.129:443: connect: connection refused Nov 26 04:17:03.814: INFO: Retrying .... 
Nov 26 04:17:57.814: INFO: Retrying .... error trying to get Service net-tiers-svc: Get "https://35.230.67.129/api/v1/namespaces/se