go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001177770)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 07:32:45.525
Nov 26 07:32:45.525: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/26/22 07:32:45.527
Nov 26 07:32:45.566: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused
[... the same "connection refused" error repeated at ~2s intervals from 07:32:47.607 through 07:33:15.606 ...]
Nov 26 07:33:15.645: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused
Nov 26 07:33:15.645: INFO: Unexpected error:
    <*errors.errorString | 0xc0001fda30>: {
        s: "timed out waiting for the condition",
    }
Nov 26 07:33:15.645: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001177770)
	test/e2e/framework/framework.go:241 +0x96f

[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 26 07:33:15.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 07:33:15.685
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376

There were additional failures detected after the initial failure:

[FAILED] Nov 26 07:32:44.135: failed to list events in namespace "cronjob-6932": Get "https://34.127.104.189/api/v1/namespaces/cronjob-6932/events": dial tcp 34.127.104.189:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 07:32:44.174: Couldn't delete ns: "cronjob-6932": Delete "https://34.127.104.189/api/v1/namespaces/cronjob-6932": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/cronjob-6932", Err:(*net.OpError)(0xc00445e7d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 07:32:05.52
Nov 26 07:32:05.520: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 07:32:05.521
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:32:05.724
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:32:05.812
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/apps/cronjob.go:96
STEP: Creating a suspended cronjob 11/26/22 07:32:05.904
STEP: Ensuring no jobs are scheduled 11/26/22 07:32:05.975
STEP: Ensuring no job exists by listing jobs explicitly 11/26/22 07:32:44.016
Nov 26 07:32:44.055: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-6932:
    <*url.Error | 0xc004441a10>: {
        Op: "Get",
        URL: "https://34.127.104.189/apis/batch/v1/namespaces/cronjob-6932/jobs",
        Err: <*net.OpError | 0xc00445e2d0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0044ba9c0>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 127, 104, 189],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc004207f40>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 26 07:32:44.055: FAIL: Failed to list the CronJobs in namespace cronjob-6932: Get "https://34.127.104.189/apis/batch/v1/namespaces/cronjob-6932/jobs": dial tcp 34.127.104.189:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376

[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 07:32:44.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 07:32:44.095
STEP: Collecting events from namespace "cronjob-6932". 11/26/22 07:32:44.095
Nov 26 07:32:44.135: INFO: Unexpected error: failed to list events in namespace "cronjob-6932":
    <*url.Error | 0xc0044ba9f0>: {
        Op: "Get",
        URL: "https://34.127.104.189/api/v1/namespaces/cronjob-6932/events",
        Err: <*net.OpError | 0xc003f71270>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0044703c0>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 127, 104, 189],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc003c5ed80>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 26 07:32:44.135: FAIL: failed to list events in namespace "cronjob-6932": Get "https://34.127.104.189/api/v1/namespaces/cronjob-6932/events": dial tcp 34.127.104.189:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003f665c0, {0xc004137ec0, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00347a820}, {0xc004137ec0, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003f66650?, {0xc004137ec0?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009cb860)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc004144330?, 0xc004166fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0036e63c8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc004144330?, 0x29449fc?}, {0xae73300?, 0xc004166f80?, 0x2a6d786?})
	/usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-6932" for this suite. 11/26/22 07:32:44.135
Nov 26 07:32:44.174: FAIL: Couldn't delete ns: "cronjob-6932": Delete "https://34.127.104.189/api/v1/namespaces/cronjob-6932": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/cronjob-6932", Err:(*net.OpError)(0xc00445e7d0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009cb860)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0041442b0?, 0xc003840fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0041442b0?, 0x0?}, {0xae73300?, 0x5?, 0xc003a79f38?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
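The spec under test creates a suspended CronJob and then verifies that no Jobs appear (the failure here came from the apiserver becoming unreachable mid-check, not from the suspend behavior itself). For reference, a manifest of the kind the test exercises might look like the following; this is an illustrative config fragment, not the suite's exact spec, and the name is hypothetical:

```yaml
# Illustrative suspended CronJob: with spec.suspend set to true, the
# controller tracks schedule times but creates no Jobs until unsuspended.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: suspended-cronjob   # hypothetical name
spec:
  schedule: "*/1 * * * *"
  suspend: true             # the field under test
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: registry.k8s.io/e2e-test-images/agnhost:2.43
            command: ["sleep", "30"]
```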
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
	test/e2e/auth/service_accounts.go:520 +0x9ab
from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 07:32:03.85
Nov 26 07:32:03.850: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename svcaccounts 11/26/22 07:32:03.852
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:32:04.027
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:32:04.128
[BeforeEach] [sig-auth] ServiceAccounts
  test/e2e/framework/metrics/init/init.go:31
[It] should support InClusterConfig with token rotation [Slow]
  test/e2e/auth/service_accounts.go:432
Nov 26 07:32:04.307: INFO: created pod
Nov 26 07:32:04.307: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient]
Nov 26 07:32:04.307: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-9163" to be "running and ready"
Nov 26 07:32:04.371: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 64.199909ms
Nov 26 07:32:04.371: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-svrn' to be 'Running' but was 'Pending'
Nov 26 07:32:06.428: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 2.121398359s
Nov 26 07:32:06.428: INFO: Pod "inclusterclient" satisfied condition "running and ready"
Nov 26 07:32:06.428: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient]
Nov 26 07:32:06.428: INFO: pod is ready
Nov 26 07:33:06.428: INFO: polling logs
Nov 26 07:33:06.468: INFO: Error pulling logs: Get "https://34.127.104.189/api/v1/namespaces/svcaccounts-9163/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 34.127.104.189:443: connect: connection refused
Nov 26 07:34:06.429: INFO: polling logs
Nov 26 07:34:06.558: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 26 07:35:06.429: INFO: polling logs
Nov 26 07:35:06.476: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 26 07:36:06.429: INFO: polling logs
Nov 26 07:36:06.474: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #6
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m0.371s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 2405 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00122a3a8, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001ebde08?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  > k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
      test/e2e/auth/service_accounts.go:503
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc000899b00, 0xc000d87800})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:37:06.429: INFO: polling logs
Nov 26 07:37:06.891: FAIL: Unexpected error: inclusterclient reported an error: saw status=failed
I1126 07:32:05.440463       1 main.go:61] started
I1126 07:32:35.444854       1 main.go:79] calling /healthz
I1126 07:32:35.445146       1 main.go:96] authz_header=LI5Td_w4OAMwv-XSNutggvZsymyh9p2tn-My9_jdVWA
I1126 07:33:05.445518       1 main.go:79] calling /healthz
I1126 07:33:05.445794       1 main.go:96] authz_header=LI5Td_w4OAMwv-XSNutggvZsymyh9p2tn-My9_jdVWA
E1126 07:33:05.446639       1 main.go:82] status=failed
E1126 07:33:05.446656       1 main.go:83] error checking /healthz: Get "https://10.0.0.1:443/healthz": dial tcp 10.0.0.1:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
	test/e2e/auth/service_accounts.go:520 +0x9ab

[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/node/init/init.go:32
Nov 26 07:37:06.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-auth] ServiceAccounts
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-auth] ServiceAccounts
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 07:37:07.048
STEP: Collecting events from namespace "svcaccounts-9163". 11/26/22 07:37:07.048
STEP: Found 5 events.
11/26/22 07:37:07.095
Nov 26 07:37:07.095: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for inclusterclient: { } Scheduled: Successfully assigned svcaccounts-9163/inclusterclient to bootstrap-e2e-minion-group-svrn
Nov 26 07:37:07.095: INFO: At 2022-11-26 07:32:05 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-svrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 07:37:07.095: INFO: At 2022-11-26 07:32:05 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-svrn} Created: Created container inclusterclient
Nov 26 07:37:07.095: INFO: At 2022-11-26 07:32:05 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-svrn} Started: Started container inclusterclient
Nov 26 07:37:07.095: INFO: At 2022-11-26 07:33:33 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-svrn} Killing: Stopping container inclusterclient
Nov 26 07:37:07.140: INFO: POD              NODE                             PHASE   GRACE  CONDITIONS
Nov 26 07:37:07.140: INFO: inclusterclient  bootstrap-e2e-minion-group-svrn  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:33:34 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:33:34 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:04 +0000 UTC  }]
Nov 26 07:37:07.140: INFO:
Nov 26 07:37:07.323: INFO: Logging node info for node bootstrap-e2e-master
Nov 26 07:37:07.365: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f12dfba9-8340-4384-a012-464bb8ff014b 11147 0 2022-11-26 07:14:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64
kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:35:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:35:06 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:35:06 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:35:06 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:35:06 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.104.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4341b6df721ee06de14317c6e64c7913,SystemUUID:4341b6df-721e-e06d-e143-17c6e64c7913,BootID:0fd660c7-349c-4c78-8001-012f07790551,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b 
gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:37:07.366: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 07:37:07.412: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 07:37:07.464: INFO: metadata-proxy-v0.1-f9lfz started at 2022-11-26 07:14:27 +0000 UTC (0+2 container statuses recorded) Nov 26 07:37:07.464: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 07:37:07.465: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 07:37:07.465: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container etcd-container ready: true, restart count 2 Nov 26 07:37:07.465: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container etcd-container ready: true, restart count 2 Nov 26 07:37:07.465: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded) Nov 26 
07:37:07.465: INFO: Container kube-scheduler ready: true, restart count 3 Nov 26 07:37:07.465: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 07:13:53 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container l7-lb-controller ready: false, restart count 7 Nov 26 07:37:07.465: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container konnectivity-server-container ready: true, restart count 2 Nov 26 07:37:07.465: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container kube-apiserver ready: true, restart count 2 Nov 26 07:37:07.465: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 07:13:53 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 26 07:37:07.465: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 07:13:56 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:07.465: INFO: Container kube-controller-manager ready: true, restart count 6 Nov 26 07:37:07.744: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 07:37:07.744: INFO: Logging node info for node bootstrap-e2e-minion-group-svrn Nov 26 07:37:07.823: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-svrn 0b46f31f-d25c-4604-ba86-b3e98c09449d 11941 0 2022-11-26 07:14:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-svrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.hostpath.csi/node:bootstrap-e2e-minion-group-svrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9402":"bootstrap-e2e-minion-group-svrn","csi-hostpath-provisioning-9550":"bootstrap-e2e-minion-group-svrn","csi-mock-csi-mock-volumes-5988":"bootstrap-e2e-minion-group-svrn"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:34:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 07:36:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 07:36:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-svrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 
UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:34:36 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:36:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.23.98,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a792d55bc5ad5cdad144cb5b4dfa29f,SystemUUID:6a792d55-bc5a-d5cd-ad14-4cb5b4dfa29f,BootID:d19434b3-94eb-452d-a279-fc84362b7cab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8372^ab1f7fc6-6d5c-11ed-96c7-c2ddb80fc067 kubernetes.io/csi/csi-mock-csi-mock-volumes-5988^133ed1f7-6d5d-11ed-8921-d2d874b08a41],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8372^ab1f7fc6-6d5c-11ed-96c7-c2ddb80fc067,DevicePath:,},},Config:nil,},}
Nov 26 07:37:07.823: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-svrn
Nov 26 07:37:07.886: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-svrn
Nov 26 07:37:08.103: INFO: l7-default-backend-8549d69d99-fz66r started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container default-http-backend ready: true, restart count 0
Nov 26 07:37:08.103: INFO: affinity-lb-esipp-transition-cqc64 started at 2022-11-26 07:36:19 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container affinity-lb-esipp-transition ready: false, restart count 2
Nov 26 07:37:08.103: INFO: pod-f100ce75-6c55-411a-a818-0739167e4865 started at 2022-11-26 07:32:41 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container write-pod ready: false, restart count 0
Nov 26 07:37:08.103: INFO: konnectivity-agent-59kfk started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container konnectivity-agent ready: false, restart count 7
Nov 26 07:37:08.103: INFO: pod-e251ec22-6288-4cf2-a290-a063e3c72c06 started at 2022-11-26 07:17:00 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container write-pod ready: false, restart count 0
Nov 26 07:37:08.103: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:21 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container csi-attacher ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container csi-provisioner ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container csi-resizer ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container csi-snapshotter ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container hostpath ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container liveness-probe ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container node-driver-registrar ready: true, restart count 1
Nov 26 07:37:08.103: INFO: metadata-proxy-v0.1-hbvvs started at 2022-11-26 07:14:31 +0000 UTC (0+2 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 07:37:08.103: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 07:37:08.103: INFO: back-off-cap started at 2022-11-26 07:28:02 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container back-off-cap ready: false, restart count 6
Nov 26 07:37:08.103: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:07 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container csi-attacher ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container csi-provisioner ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container csi-resizer ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container csi-snapshotter ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container hostpath ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container liveness-probe ready: false, restart count 5
Nov 26 07:37:08.103: INFO: Container node-driver-registrar ready: false, restart count 5
Nov 26 07:37:08.103: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:32 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container csi-attacher ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-provisioner ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-resizer ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-snapshotter ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container hostpath ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container liveness-probe ready: false, restart count 4
Nov 26 07:37:08.103: INFO: Container node-driver-registrar ready: false, restart count 4
Nov 26 07:37:08.103: INFO: csi-mockplugin-0 started at 2022-11-26 07:36:34 +0000 UTC (0+3 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container csi-provisioner ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container driver-registrar ready: true, restart count 1
Nov 26 07:37:08.103: INFO: Container mock ready: true, restart count 1
Nov 26 07:37:08.103: INFO: inclusterclient started at 2022-11-26 07:32:04 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container inclusterclient ready: false, restart count 0
Nov 26 07:37:08.103: INFO: kube-proxy-bootstrap-e2e-minion-group-svrn started at 2022-11-26 07:14:30 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container kube-proxy ready: true, restart count 7
Nov 26 07:37:08.103: INFO: coredns-6d97d5ddb-znrwb started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container coredns ready: false, restart count 8
Nov 26 07:37:08.103: INFO: volume-snapshot-controller-0 started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container volume-snapshot-controller ready: false, restart count 7
Nov 26 07:37:08.103: INFO: pod-d8ff177b-2854-4a50-bb22-0b48cc6c799f started at 2022-11-26 07:32:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container write-pod ready: false, restart count 0
Nov 26 07:37:08.103: INFO: hostexec-bootstrap-e2e-minion-group-svrn-l4bw2 started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container agnhost-container ready: true, restart count 1
Nov 26 07:37:08.103: INFO: pod-subpath-test-inlinevolume-zshr started at 2022-11-26 07:17:36 +0000 UTC (1+2 container statuses recorded)
Nov 26 07:37:08.103: INFO: Init container init-volume-inlinevolume-zshr ready: true, restart count 5
Nov 26 07:37:08.103: INFO: Container test-container-subpath-inlinevolume-zshr ready: false, restart count 6
Nov 26 07:37:08.103: INFO: Container test-container-volume-inlinevolume-zshr ready: false, restart count 6
Nov 26 07:37:08.103: INFO: kube-dns-autoscaler-5f6455f985-4pppz started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container autoscaler ready: true, restart count 7
Nov 26 07:37:08.103: INFO: hostexec-bootstrap-e2e-minion-group-svrn-ndxqc started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container agnhost-container ready: false, restart count 5
Nov 26 07:37:08.103: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:08 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container csi-attacher ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-provisioner ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-resizer ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container csi-snapshotter ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container hostpath ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container liveness-probe ready: true, restart count 4
Nov 26 07:37:08.103: INFO: Container node-driver-registrar ready: true, restart count 4
Nov 26 07:37:08.103: INFO: pod-subpath-test-dynamicpv-d4hz started at 2022-11-26 07:36:17 +0000 UTC (1+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Init container init-volume-dynamicpv-d4hz ready: false, restart count 0
Nov 26 07:37:08.103: INFO: Container test-container-subpath-dynamicpv-d4hz ready: false, restart count 0
Nov 26 07:37:08.103: INFO: pvc-volume-tester-hdf97 started at 2022-11-26 07:36:46 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.103: INFO: Container volume-tester ready: false, restart count 0
Nov 26 07:37:08.390: INFO: Latency metrics for node bootstrap-e2e-minion-group-svrn
Nov 26 07:37:08.390: INFO: Logging node info for node bootstrap-e2e-minion-group-v6kp
Nov 26 07:37:08.437: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v6kp 1b4c00d7-9f80-4c8f-bcb4-5fdf079da6d6 11981 0 2022-11-26 07:14:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v6kp kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-v6kp topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8553":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-multivolume-8709":"bootstrap-e2e-minion-group-v6kp","csi-mock-csi-mock-volumes-4257":"bootstrap-e2e-minion-group-v6kp"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:34:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}
} status} {kube-controller-manager Update v1 2022-11-26 07:36:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 07:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-v6kp,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is 
functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:34:32 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:58 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:58 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has 
no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:36:58 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:36:58 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.227.156.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35b699b12f5019228f1e2e38d963976d,SystemUUID:35b699b1-2f50-1922-8f1e-2e38d963976d,BootID:5793a9ad-d1f5-4512-925a-2b321cb699ee,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8553^6418808c-6d5c-11ed-83de-86d9cddca60a kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627 kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8553^6418808c-6d5c-11ed-83de-86d9cddca60a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f,DevicePath:,},},Config:nil,},}
Nov 26 07:37:08.437: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v6kp
Nov 26 07:37:08.483: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v6kp
Nov 26 07:37:08.626: INFO: csi-mockplugin-0 started at 2022-11-26 07:26:48 +0000 UTC (0+3 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container csi-provisioner ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container driver-registrar ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container mock ready: true, restart count 4
Nov 26 07:37:08.626: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:38 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container csi-attacher ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container csi-provisioner ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container csi-resizer ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container csi-snapshotter ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container hostpath ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container liveness-probe ready: false, restart count 6
Nov 26 07:37:08.626: INFO: Container node-driver-registrar ready: false, restart count 6
Nov 26 07:37:08.626: INFO: pod-afc70214-dd83-49f5-b22a-c874fc6e5577 started at 2022-11-26 07:31:50 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container write-pod ready: false, restart count 0
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-hjsww started at 2022-11-26 07:17:36 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 4
Nov 26 07:37:08.626: INFO: affinity-lb-esipp-transition-g9mct started at 2022-11-26 07:36:19 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container affinity-lb-esipp-transition ready: true, restart count 0
Nov 26 07:37:08.626: INFO: pod-subpath-test-preprovisionedpv-bcww started at 2022-11-26 07:17:00 +0000 UTC (1+2 container statuses recorded)
Nov 26 07:37:08.626: INFO: Init container init-volume-preprovisionedpv-bcww ready: true, restart count 6
Nov 26 07:37:08.626: INFO: Container test-container-subpath-preprovisionedpv-bcww ready: false, restart count 5
Nov 26 07:37:08.626: INFO: Container test-container-volume-preprovisionedpv-bcww ready: false, restart count 5
Nov 26 07:37:08.626: INFO: hostpathsymlink-io-client started at 2022-11-26 07:17:30 +0000 UTC (1+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Init container hostpathsymlink-io-init ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container hostpathsymlink-io-client ready: false, restart count 0
Nov 26 07:37:08.626: INFO: kube-proxy-bootstrap-e2e-minion-group-v6kp started at 2022-11-26 07:14:26 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container kube-proxy ready: true, restart count 8
Nov 26 07:37:08.626: INFO: pod-subpath-test-dynamicpv-sbdn started at 2022-11-26 07:17:17 +0000 UTC (1+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Init container init-volume-dynamicpv-sbdn ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container test-container-subpath-dynamicpv-sbdn ready: false, restart count 0
Nov 26 07:37:08.626: INFO: volume-prep-provisioning-6967 started at 2022-11-26 07:17:31 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container init-volume-provisioning-6967 ready: false, restart count 0
Nov 26 07:37:08.626: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:31:44 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container csi-attacher ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container csi-provisioner ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container csi-resizer ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container csi-snapshotter ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container hostpath ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container liveness-probe ready: true, restart count 4
Nov 26 07:37:08.626: INFO: Container node-driver-registrar ready: true, restart count 4
Nov 26 07:37:08.626: INFO: konnectivity-agent-psnzt started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container konnectivity-agent ready: true, restart count 7
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-hrstr started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 6
Nov 26 07:37:08.626: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:39 +0000 UTC (0+7 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container csi-attacher ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container csi-provisioner ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container csi-resizer ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container hostpath ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container liveness-probe ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container node-driver-registrar ready: true, restart count 0
Nov 26 07:37:08.626: INFO: metadata-proxy-v0.1-7k4s6 started at 2022-11-26 07:14:27 +0000 UTC (0+2 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 07:37:08.626: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-lfftx started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 3
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-bkzbv started at 2022-11-26 07:17:32 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 5
Nov 26 07:37:08.626: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 07:26:48 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container csi-attacher ready: true, restart count 5
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-dqt4r started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 1
Nov 26 07:37:08.626: INFO: pod-configmaps-601d851b-9baa-4ba4-939b-2d8ceb3ae50c started at 2022-11-26 07:29:25 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: false, restart count 0
Nov 26 07:37:08.626: INFO: pod-subpath-test-dynamicpv-z58q started at 2022-11-26 07:31:41 +0000 UTC (1+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Init container init-volume-dynamicpv-z58q ready: false, restart count 0
Nov 26 07:37:08.626: INFO: Container test-container-subpath-dynamicpv-z58q ready: false, restart count 0
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-dq8fq started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: true, restart count 2
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-w7jkx started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: false, restart count 6
Nov 26 07:37:08.626: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-4dj2d started at 2022-11-26 07:17:10 +0000 UTC (0+1 container statuses recorded)
Nov 26 07:37:08.626: INFO: Container agnhost-container ready: false, restart count 6
Nov 26 07:37:08.626: INFO: pod-subpath-test-preprovisionedpv-5228 started at 2022-11-26 07:17:15 +0000 UTC (1+2 container statuses recorded)
Nov 26 07:37:08.626: INFO: Init container init-volume-preprovisionedpv-5228 ready: true, restart count 1
Nov 26 07:37:08.626: INFO: Container 
test-container-subpath-preprovisionedpv-5228 ready: false, restart count 5 Nov 26 07:37:08.626: INFO: Container test-container-volume-preprovisionedpv-5228 ready: false, restart count 5 Nov 26 07:37:08.626: INFO: coredns-6d97d5ddb-k477c started at 2022-11-26 07:14:49 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:08.626: INFO: Container coredns ready: false, restart count 8 Nov 26 07:37:09.033: INFO: Latency metrics for node bootstrap-e2e-minion-group-v6kp Nov 26 07:37:09.033: INFO: Logging node info for node bootstrap-e2e-minion-group-zhjw Nov 26 07:37:09.083: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zhjw 02d1b2e8-572a-4705-ba12-2a030476f45b 12065 0 2022-11-26 07:14:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zhjw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zhjw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-489":"bootstrap-e2e-minion-group-zhjw","csi-mock-csi-mock-volumes-1907":"bootstrap-e2e-minion-group-zhjw","csi-mock-csi-mock-volumes-9498":"bootstrap-e2e-minion-group-zhjw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:34:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}
} status} {kube-controller-manager Update v1 2022-11-26 07:37:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-zhjw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:34:34 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:37:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.36.0,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cc67b7d9c606cf13b518cf0cb8b22fe6,SystemUUID:cc67b7d9-c606-cf13-b518-cf0cb8b22fe6,BootID:a06198bc-32f7-4d08-b37d-b3aaad431e87,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:37:09.084: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zhjw Nov 26 07:37:09.148: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zhjw Nov 26 07:37:09.225: INFO: affinity-lb-esipp-transition-ld6v8 started at 2022-11-26 07:36:19 +0000 UTC 
(0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container affinity-lb-esipp-transition ready: true, restart count 2 Nov 26 07:37:09.225: INFO: test-hostpath-type-lxw9d started at 2022-11-26 07:36:21 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 26 07:37:09.225: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:36:36 +0000 UTC (0+7 container statuses recorded) Nov 26 07:37:09.225: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container hostpath ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 07:37:09.225: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 07:37:09.225: INFO: test-hostpath-type-vtzzz started at 2022-11-26 07:36:36 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 07:37:09.225: INFO: metrics-server-v0.5.2-867b8754b9-72b8p started at 2022-11-26 07:15:04 +0000 UTC (0+2 container statuses recorded) Nov 26 07:37:09.225: INFO: Container metrics-server ready: false, restart count 8 Nov 26 07:37:09.225: INFO: Container metrics-server-nanny ready: false, restart count 8 Nov 26 07:37:09.225: INFO: test-hostpath-type-s64hc started at 2022-11-26 07:36:16 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 07:37:09.225: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-mmn75 started at 2022-11-26 07:36:17 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container agnhost-container ready: true, restart count 1 
Nov 26 07:37:09.225: INFO: csi-mockplugin-0 started at 2022-11-26 07:25:55 +0000 UTC (0+3 container statuses recorded) Nov 26 07:37:09.225: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 07:37:09.225: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 07:37:09.225: INFO: Container mock ready: true, restart count 3 Nov 26 07:37:09.225: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 07:25:55 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 07:37:09.225: INFO: pod-configmaps-3dd17a2e-0a49-47d7-a918-4415d8ce4938 started at 2022-11-26 07:36:16 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 07:37:09.225: INFO: test-hostpath-type-fhc9h started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 07:37:09.225: INFO: lb-internal-d7rbp started at <nil> (0+0 container statuses recorded) Nov 26 07:37:09.225: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-45fbb started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 07:37:09.225: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-jj84b started at 2022-11-26 07:17:14 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 07:37:09.225: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-xd7km started at 2022-11-26 07:17:25 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.225: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 07:37:09.225: INFO: pod-subpath-test-preprovisionedpv-829z started at 2022-11-26 07:17:31 +0000 UTC (1+2 container statuses recorded) Nov 26 07:37:09.225: INFO: Init container 
init-volume-preprovisionedpv-829z ready: true, restart count 4 Nov 26 07:37:09.225: INFO: Container test-container-subpath-preprovisionedpv-829z ready: false, restart count 5 Nov 26 07:37:09.225: INFO: Container test-container-volume-preprovisionedpv-829z ready: false, restart count 5 Nov 26 07:37:09.226: INFO: pod-back-off-image started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container back-off ready: true, restart count 2 Nov 26 07:37:09.226: INFO: ss-0 started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container webserver ready: true, restart count 1 Nov 26 07:37:09.226: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-qtnr9 started at 2022-11-26 07:17:36 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 07:37:09.226: INFO: test-hostpath-type-fw4qr started at 2022-11-26 07:36:28 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 26 07:37:09.226: INFO: csi-mockplugin-0 started at 2022-11-26 07:17:10 +0000 UTC (0+3 container statuses recorded) Nov 26 07:37:09.226: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 07:37:09.226: INFO: Container driver-registrar ready: true, restart count 6 Nov 26 07:37:09.226: INFO: Container mock ready: true, restart count 6 Nov 26 07:37:09.226: INFO: pod-16452104-42be-4e22-9ea5-25ee39d95a22 started at 2022-11-26 07:17:33 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container write-pod ready: false, restart count 0 Nov 26 07:37:09.226: INFO: external-provisioner-2kwtt started at 2022-11-26 07:36:25 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 26 07:37:09.226: INFO: konnectivity-agent-zm9hn started at 2022-11-26 07:14:42 +0000 UTC (0+1 
container statuses recorded) Nov 26 07:37:09.226: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 07:37:09.226: INFO: pod-subpath-test-preprovisionedpv-kvq4 started at 2022-11-26 07:17:31 +0000 UTC (1+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Init container init-volume-preprovisionedpv-kvq4 ready: true, restart count 0 Nov 26 07:37:09.226: INFO: Container test-container-subpath-preprovisionedpv-kvq4 ready: false, restart count 0 Nov 26 07:37:09.226: INFO: test-hostpath-type-xlmfm started at 2022-11-26 07:36:19 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 07:37:09.226: INFO: test-hostpath-type-jg2wg started at 2022-11-26 07:36:46 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 07:37:09.226: INFO: kube-proxy-bootstrap-e2e-minion-group-zhjw started at 2022-11-26 07:14:28 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 07:37:09.226: INFO: pod-subpath-test-inlinevolume-7tw8 started at 2022-11-26 07:17:28 +0000 UTC (1+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Init container init-volume-inlinevolume-7tw8 ready: true, restart count 0 Nov 26 07:37:09.226: INFO: Container test-container-subpath-inlinevolume-7tw8 ready: false, restart count 0 Nov 26 07:37:09.226: INFO: pod-subpath-test-preprovisionedpv-62rx started at 2022-11-26 07:17:15 +0000 UTC (1+2 container statuses recorded) Nov 26 07:37:09.226: INFO: Init container init-volume-preprovisionedpv-62rx ready: true, restart count 6 Nov 26 07:37:09.226: INFO: Container test-container-subpath-preprovisionedpv-62rx ready: false, restart count 7 Nov 26 07:37:09.226: INFO: Container test-container-volume-preprovisionedpv-62rx ready: false, restart count 7 Nov 26 07:37:09.226: INFO: csi-mockplugin-0 started at 2022-11-26 
07:27:24 +0000 UTC (0+4 container statuses recorded) Nov 26 07:37:09.226: INFO: Container busybox ready: false, restart count 3 Nov 26 07:37:09.226: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 07:37:09.226: INFO: Container driver-registrar ready: false, restart count 3 Nov 26 07:37:09.226: INFO: Container mock ready: false, restart count 3 Nov 26 07:37:09.226: INFO: pod-5a31e133-2897-4536-b4f3-5df6ba103b38 started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container write-pod ready: false, restart count 0 Nov 26 07:37:09.226: INFO: httpd started at 2022-11-26 07:36:54 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container httpd ready: false, restart count 1 Nov 26 07:37:09.226: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-jnb62 started at 2022-11-26 07:17:24 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container agnhost-container ready: false, restart count 7 Nov 26 07:37:09.226: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-tk6j2 started at 2022-11-26 07:17:18 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container agnhost-container ready: false, restart count 6 Nov 26 07:37:09.226: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-g6bbz started at 2022-11-26 07:36:21 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 07:37:09.226: INFO: test-hostpath-type-425rz started at 2022-11-26 07:36:41 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 07:37:09.226: INFO: hostpath-symlink-prep-provisioning-1595 started at <nil> (0+0 container statuses recorded) Nov 26 07:37:09.226: INFO: pod-subpath-test-inlinevolume-9wjg started at 2022-11-26 07:37:03 +0000 UTC (1+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Init container 
init-volume-inlinevolume-9wjg ready: false, restart count 0 Nov 26 07:37:09.226: INFO: Container test-container-subpath-inlinevolume-9wjg ready: false, restart count 0 Nov 26 07:37:09.226: INFO: metadata-proxy-v0.1-vzmrj started at 2022-11-26 07:14:29 +0000 UTC (0+2 container statuses recorded) Nov 26 07:37:09.226: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 07:37:09.226: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 07:37:09.226: INFO: httpd started at 2022-11-26 07:32:14 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container httpd ready: false, restart count 3 Nov 26 07:37:09.226: INFO: external-provisioner-mpk26 started at 2022-11-26 07:36:16 +0000 UTC (0+1 container statuses recorded) Nov 26 07:37:09.226: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 26 07:37:09.716: INFO: Latency metrics for node bootstrap-e2e-minion-group-zhjw [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-9163" for this suite. 11/26/22 07:37:09.716
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:23:08.965 Nov 26 07:23:08.965: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 07:23:08.967 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:24:12.221 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:24:12.316 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 07:24:12.407 Nov 26 07:24:12.408: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9054 create -f -' Nov 26 07:24:12.994: INFO: stderr: "" Nov 26 07:24:12.994: INFO: stdout: "pod/httpd created\n" Nov 26 07:24:12.994: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 07:24:12.994: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-9054" to be "running and ready" Nov 26 07:24:13.041: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 47.58037ms Nov 26 07:24:13.042: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' to be 'Running' but was 'Pending' Nov 26 07:24:15.087: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.093366171s Nov 26 07:24:15.087: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }]
[... the same pair of messages — Pod "httpd": Phase="Running", Reason="", readiness=false, followed by the identical "Error evaluating pod condition running and ready" conditions dump — repeated on a ~2s poll from 07:24:17 (elapsed 4.1s) through 07:26:01 (elapsed 1m48s); the only change across repeats is that the Ready/ContainersReady condition transition timestamp moves from 07:24:12 to 07:24:23 at the 12s mark. Pod 'httpd' never reached {Ready True}. ...]
Nov 26 07:26:03.151: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m50.157317516s Nov 26 07:26:03.151: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:05.125: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.131235381s Nov 26 07:26:05.125: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:07.100: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.106343652s Nov 26 07:26:07.100: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:09.146: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.151588956s Nov 26 07:26:09.146: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:11.109: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.114701214s Nov 26 07:26:11.109: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:13.179: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.185574329s Nov 26 07:26:13.180: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:15.243: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.249442755s Nov 26 07:26:15.243: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:17.119: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.12533404s Nov 26 07:26:17.119: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:19.169: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m6.174708546s Nov 26 07:26:19.169: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:21.125: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.131098967s Nov 26 07:26:21.125: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:23.161: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.167557403s Nov 26 07:26:23.162: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:25.120: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.126154324s Nov 26 07:26:25.120: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:27.153: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.158853136s Nov 26 07:26:27.153: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:29.287: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.293233693s Nov 26 07:26:29.287: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:31.145: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.151350191s Nov 26 07:26:31.145: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:33.226: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.232062169s Nov 26 07:26:33.226: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:35.122: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m22.12838313s Nov 26 07:26:35.122: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:37.228: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.234317507s Nov 26 07:26:37.228: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:39.189: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.195260638s Nov 26 07:26:39.189: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:41.179: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.18530301s Nov 26 07:26:41.179: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:43.119: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m30.125555471s Nov 26 07:26:43.120: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:45.184: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.19028491s Nov 26 07:26:45.184: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:47.220: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m34.226344632s Nov 26 07:26:47.221: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:49.182: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.188205542s Nov 26 07:26:49.182: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:51.206: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m38.211792063s Nov 26 07:26:51.206: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:53.164: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.170029979s Nov 26 07:26:53.164: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:55.226: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m42.232053234s Nov 26 07:26:55.226: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:57.147: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.15327829s Nov 26 07:26:57.147: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:26:59.286: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m46.292359121s Nov 26 07:26:59.286: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:01.127: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.132846612s Nov 26 07:27:01.127: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:03.113: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m50.11949081s Nov 26 07:27:03.113: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:05.112: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.117649854s Nov 26 07:27:05.112: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:07.130: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m54.136282138s Nov 26 07:27:07.130: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:09.214: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.220472767s Nov 26 07:27:09.214: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:11.130: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m58.136266253s Nov 26 07:27:11.130: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:13.152: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.158247072s Nov 26 07:27:13.152: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:15.115: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m2.121332621s Nov 26 07:27:15.115: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:17.113: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.119283314s Nov 26 07:27:17.113: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:27:19.235: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.241571093s Nov 26 07:27:19.236: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }]
[... the same poll entry repeated every ~2s from Nov 26 07:27:19 through 07:29:07 (elapsed 3m6s to 4m52s): Pod "httpd": Phase="Running", Reason="", readiness=false, with reason ContainersNotReady and unready container [httpd] throughout; the only change is that at 07:27:23 the lastTransitionTime of the Ready and ContainersReady conditions updated from 07:24:23 to 07:27:23 ...]
Elapsed: 4m52.187990303s Nov 26 07:29:05.182: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:29:07.102: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m54.108123553s Nov 26 07:29:07.102: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:29:09.246: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.251705374s Nov 26 07:29:09.246: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:29:11.129: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m58.134624804s Nov 26 07:29:11.129: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open (Spec Runtime: 6m3.443s) test/e2e/kubectl/kubectl.go:585 In [BeforeEach] (Node Runtime: 5m0s) test/e2e/kubectl/kubectl.go:411 At [By Step] creating the pod from (Step Runtime: 5m0s) test/e2e/kubectl/kubectl.go:412 Spec Goroutine goroutine 1877 [chan receive, 6 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x801de88?, 0xc001c34820}, {0xc004482d10, 0xc}, {0xc0044ab5e0, 0x1, 0x1}, 0x45d964b800, 0x78965c0, {0x75ee704, ...}) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReady(...) 
test/e2e/framework/pod/resource.go:501 > k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc005022180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:29:13.167: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.17313006s Nov 26 07:29:13.167: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:29:13.271: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.277361789s Nov 26 07:29:13.271: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-svrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:27:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:12 +0000 UTC }] Nov 26 07:29:13.271: INFO: Pod httpd failed to be running and ready. Nov 26 07:29:13.271: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Nov 26 07:29:13.272: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 07:29:13.272 Nov 26 07:29:13.272: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9054 delete --grace-period=0 --force -f -' Nov 26 07:29:13.715: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 26 07:29:13.715: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 26 07:29:13.715: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9054 get rc,svc -l name=httpd --no-headers' Nov 26 07:29:14.049: INFO: stderr: "No resources found in kubectl-9054 namespace.\n" Nov 26 07:29:14.049: INFO: stdout: "" Nov 26 07:29:14.049: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9054 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 26 07:29:14.377: INFO: stderr: "" Nov 26 07:29:14.377: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 07:29:14.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 07:29:14.449 STEP: Collecting events from namespace "kubectl-9054". 11/26/22 07:29:14.449 STEP: Found 7 events. 
11/26/22 07:29:14.504
Nov 26 07:29:14.504: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for httpd: { } Scheduled: Successfully assigned kubectl-9054/httpd to bootstrap-e2e-minion-group-svrn
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:13 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:13 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} Created: Created container httpd
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:13 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} Started: Started container httpd
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:23 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} Killing: Stopping container httpd
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:23 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 07:29:14.504: INFO: At 2022-11-26 07:24:26 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-svrn} BackOff: Back-off restarting failed container httpd in pod httpd_kubectl-9054(08fdfa3e-62dc-4777-bf5c-ee21f05942f6) Nov 26 07:29:14.563: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 07:29:14.563: INFO: Nov 26 07:29:14.644: INFO: Logging node info for node bootstrap-e2e-master Nov 26 07:29:14.729: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f12dfba9-8340-4384-a012-464bb8ff014b 6314 0 2022-11-26 07:14:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:24:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.104.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4341b6df721ee06de14317c6e64c7913,SystemUUID:4341b6df-721e-e06d-e143-17c6e64c7913,BootID:0fd660c7-349c-4c78-8001-012f07790551,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:29:14.730: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 07:29:14.804: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 07:29:14.891: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 07:29:14.891: INFO: Logging node info for node bootstrap-e2e-minion-group-svrn Nov 26 07:29:14.948: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-svrn 0b46f31f-d25c-4604-ba86-b3e98c09449d 9143 0 2022-11-26 07:14:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-svrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-svrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9402":"bootstrap-e2e-minion-group-svrn"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 07:29:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-26 07:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-svrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 
07:14:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:26:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:26:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:26:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:26:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.23.98,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a792d55bc5ad5cdad144cb5b4dfa29f,SystemUUID:6a792d55-bc5a-d5cd-ad14-4cb5b4dfa29f,BootID:d19434b3-94eb-452d-a279-fc84362b7cab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-9402^056006c3-6d5c-11ed-89b1-d2f4207fbe7a,DevicePath:,},},Config:nil,},} Nov 26 07:29:14.948: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-svrn Nov 26 07:29:15.008: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-svrn Nov 26 07:29:15.108: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-svrn: error trying to reach service: No agent available Nov 26 07:29:15.108: INFO: Logging node info for node bootstrap-e2e-minion-group-v6kp Nov 26 07:29:15.167: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v6kp 1b4c00d7-9f80-4c8f-bcb4-5fdf079da6d6 8980 0 2022-11-26 07:14:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v6kp kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-v6kp topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:24:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}
} status} {kube-controller-manager Update v1 2022-11-26 07:28:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 07:28:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-v6kp,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:28:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.227.156.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35b699b12f5019228f1e2e38d963976d,SystemUUID:35b699b1-2f50-1922-8f1e-2e38d963976d,BootID:5793a9ad-d1f5-4512-925a-2b321cb699ee,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627,DevicePath:,},},Config:nil,},} Nov 26 07:29:15.168: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v6kp Nov 26 07:29:15.233: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v6kp Nov 26 07:29:15.329: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-v6kp: error trying to reach service: No agent available Nov 26 07:29:15.329: INFO: Logging node info for node bootstrap-e2e-minion-group-zhjw Nov 26 07:29:15.456: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zhjw 
02d1b2e8-572a-4705-ba12-2a030476f45b 8714 0 2022-11-26 07:14:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zhjw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zhjw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1972":"csi-mock-csi-mock-volumes-1972","csi-mock-csi-mock-volumes-9498":"bootstrap-e2e-minion-group-zhjw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 07:24:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 07:26:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:28:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-zhjw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 
+0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:37 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:37 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:28:37 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:28:37 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.36.0,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cc67b7d9c606cf13b518cf0cb8b22fe6,SystemUUID:cc67b7d9-c606-cf13-b518-cf0cb8b22fe6,BootID:a06198bc-32f7-4d08-b37d-b3aaad431e87,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:29:15.456: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zhjw Nov 26 07:29:15.645: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zhjw Nov 26 07:29:15.809: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zhjw: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-9054" for this suite. 11/26/22 07:29:15.809
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 26 07:32:42.637: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-432 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.127.104.189/api/v1/namespaces/kubectl-432/pods/httpd": dial tcp 34.127.104.189:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 26 07:32:42.717: failed to list events in namespace "kubectl-432": Get "https://34.127.104.189/api/v1/namespaces/kubectl-432/events": dial tcp 34.127.104.189:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 07:32:42.757: Couldn't delete ns: "kubectl-432": Delete "https://34.127.104.189/api/v1/namespaces/kubectl-432": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/kubectl-432", Err:(*net.OpError)(0xc003484410)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:32:13.287 Nov 26 07:32:13.287: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 07:32:13.289 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:32:13.545 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:32:13.723 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 07:32:13.872 Nov 26 07:32:13.872: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-432 create -f -' Nov 26 07:32:14.386: INFO: stderr: "" Nov 26 07:32:14.386: INFO: stdout: "pod/httpd created\n" Nov 26 07:32:14.386: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 07:32:14.386: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-432" to be "running and ready" Nov 26 07:32:14.467: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 80.296409ms Nov 26 07:32:14.467: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:32:16.532: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.145302917s Nov 26 07:32:16.532: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:18.527: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.14084585s Nov 26 07:32:18.527: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:20.594: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.207713235s Nov 26 07:32:20.594: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:22.593: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.206926391s Nov 26 07:32:22.593: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:24.525: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.138973706s Nov 26 07:32:24.525: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:26.574: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.187918036s Nov 26 07:32:26.574: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:28.545: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.158909505s Nov 26 07:32:28.545: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:30.539: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.152704321s Nov 26 07:32:30.539: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:32.556: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.169905472s Nov 26 07:32:32.556: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:34.561: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.17427039s Nov 26 07:32:34.561: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:36.530: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.143356811s Nov 26 07:32:36.530: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:38.683: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.296842962s Nov 26 07:32:38.683: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:40.577: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.190714836s Nov 26 07:32:40.577: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-zhjw' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:32:14 +0000 UTC }] Nov 26 07:32:42.507: INFO: Encountered non-retryable error while getting pod kubectl-432/httpd: Get "https://34.127.104.189/api/v1/namespaces/kubectl-432/pods/httpd": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:42.507: INFO: Pod httpd failed to be running and ready. Nov 26 07:32:42.507: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [httpd] Nov 26 07:32:42.507: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 07:32:42.508 Nov 26 07:32:42.508: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-432 delete --grace-period=0 --force -f -' Nov 26 07:32:42.637: INFO: rc: 1 Nov 26 07:32:42.637: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc001346010>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-432 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.127.104.189/api/v1/namespaces/kubectl-432/pods/httpd\": dial tcp 34.127.104.189:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 07:32:42.637: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=kubectl-432 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
error: error when deleting "STDIN": Delete "https://34.127.104.189/api/v1/namespaces/kubectl-432/pods/httpd": dial tcp 34.127.104.189:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc001be2420?, 0x0?}, {0xc000b835b0, 0xb}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc000b835b0, 0xb}, {0xc000106f20, 0x145}, {0xc003333ec0?, 0x8?, 0x7f83d8bd85b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc000106f20, 0x145}, {0xc000b835b0, 0xb}, {0xc0014e3180, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 07:32:42.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 07:32:42.677 STEP: Collecting events from namespace "kubectl-432". 
11/26/22 07:32:42.677 Nov 26 07:32:42.717: INFO: Unexpected error: failed to list events in namespace "kubectl-432": <*url.Error | 0xc003493aa0>: { Op: "Get", URL: "https://34.127.104.189/api/v1/namespaces/kubectl-432/events", Err: <*net.OpError | 0xc002ff4ff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003050c60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 127, 104, 189], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0035768a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 07:32:42.717: FAIL: failed to list events in namespace "kubectl-432": Get "https://34.127.104.189/api/v1/namespaces/kubectl-432/events": dial tcp 34.127.104.189:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003b085c0, {0xc000b835b0, 0xb}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0034a4340}, {0xc000b835b0, 0xb}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003b08650?, {0xc000b835b0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000a982d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00110c7f0?, 0xc0047a1fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0008c38e8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00110c7f0?, 0x29449fc?}, {0xae73300?, 0xc0047a1f80?, 0xc000a46300?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-432" for this suite. 
11/26/22 07:32:42.717 Nov 26 07:32:42.757: FAIL: Couldn't delete ns: "kubectl-432": Delete "https://34.127.104.189/api/v1/namespaces/kubectl-432": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/kubectl-432", Err:(*net.OpError)(0xc003484410)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a982d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00110c720?, 0xc0034820a8?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x71ad140?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00110c720?, 0x78965c0?}, {0xae73300?, 0xc0034a4340?, 0xc000b835b0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1535 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1535 +0x357 from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:23:21.704 Nov 26 07:23:21.704: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 07:23:21.706 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:24:12.317 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:24:12.409 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-6505/external-local-update with type=LoadBalancer 11/26/22 07:24:12.818 STEP: setting ExternalTrafficPolicy=Local 11/26/22 07:24:12.818 STEP: waiting for loadbalancer for service esipp-6505/external-local-update 11/26/22 07:24:12.949 Nov 26 07:24:12.949: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 07:24:47.084 Nov 26 07:24:47.311: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 07:24:47.384: INFO: Found all 1 pods Nov 26 07:24:47.384: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-ld697] Nov 26 07:24:47.384: INFO: Waiting up to 2m0s for pod "external-local-update-ld697" in namespace "esipp-6505" to be "running and ready" Nov 26 07:24:47.476: INFO: Pod "external-local-update-ld697": Phase="Pending", Reason="", readiness=false. 
Elapsed: 91.832385ms Nov 26 07:24:47.476: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-ld697' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:24:49.585: INFO: Pod "external-local-update-ld697": Phase="Running", Reason="", readiness=true. Elapsed: 2.200826947s Nov 26 07:24:49.585: INFO: Pod "external-local-update-ld697" satisfied condition "running and ready" Nov 26 07:24:49.585: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-update-ld697] STEP: waiting for loadbalancer for service esipp-6505/external-local-update 11/26/22 07:24:49.585 Nov 26 07:24:49.585: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 07:24:49.657 Nov 26 07:24:51.010: FAIL: Expected <int>: 0 not to equal <int>: 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1535 +0x357 Nov 26 07:24:51.176: INFO: Waiting up to 15m0s for service "external-local-update" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 07:25:01.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 07:25:01.543: INFO: Output of kubectl describe svc: Nov 26 07:25:01.543: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-6505 describe svc --namespace=esipp-6505' Nov 26 07:25:01.986: INFO: stderr: "" Nov 26 07:25:01.986: INFO: stdout: "Name: external-local-update\nNamespace: esipp-6505\nLabels: testid=external-local-update-632f5edf-abcd-4491-a47f-a4a80e50bfcb\nAnnotations: <none>\nSelector: testid=external-local-update-632f5edf-abcd-4491-a47f-a4a80e50bfcb\nType: ClusterIP\nIP Family 
Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.227.56\nIPs: 10.0.227.56\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.2.83:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 49s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 15s service-controller Ensured load balancer\n Normal ExternalTrafficPolicy 12s service-controller Local -> Cluster\n Normal Type 10s service-controller LoadBalancer -> ClusterIP\n" Nov 26 07:25:01.986: INFO: Name: external-local-update Namespace: esipp-6505 Labels: testid=external-local-update-632f5edf-abcd-4491-a47f-a4a80e50bfcb Annotations: <none> Selector: testid=external-local-update-632f5edf-abcd-4491-a47f-a4a80e50bfcb Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.227.56 IPs: 10.0.227.56 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.64.2.83:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 49s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 15s service-controller Ensured load balancer Normal ExternalTrafficPolicy 12s service-controller Local -> Cluster Normal Type 10s service-controller LoadBalancer -> ClusterIP [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 07:25:01.986 STEP: Collecting events from namespace "esipp-6505". 11/26/22 07:25:01.986 STEP: Found 11 events. 
11/26/22 07:25:02.035 Nov 26 07:25:02.036: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-update-ld697: { } Scheduled: Successfully assigned esipp-6505/external-local-update-ld697 to bootstrap-e2e-minion-group-zhjw Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:12 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:46 +0000 UTC - event for external-local-update: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:47 +0000 UTC - event for external-local-update: {replication-controller } SuccessfulCreate: Created pod: external-local-update-ld697 Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:48 +0000 UTC - event for external-local-update-ld697: {kubelet bootstrap-e2e-minion-group-zhjw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:48 +0000 UTC - event for external-local-update-ld697: {kubelet bootstrap-e2e-minion-group-zhjw} Created: Created container netexec Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:48 +0000 UTC - event for external-local-update-ld697: {kubelet bootstrap-e2e-minion-group-zhjw} Started: Started container netexec Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:49 +0000 UTC - event for external-local-update: {service-controller } ExternalTrafficPolicy: Local -> Cluster Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:49 +0000 UTC - event for external-local-update-ld697: {kubelet bootstrap-e2e-minion-group-zhjw} Killing: Stopping container netexec Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:50 +0000 UTC - event for external-local-update-ld697: {kubelet bootstrap-e2e-minion-group-zhjw} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 26 07:25:02.036: INFO: At 2022-11-26 07:24:51 +0000 UTC - event for external-local-update: {service-controller } Type: LoadBalancer -> ClusterIP Nov 26 07:25:02.085: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 07:25:02.085: INFO: external-local-update-ld697 bootstrap-e2e-minion-group-zhjw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:24:47 +0000 UTC }] Nov 26 07:25:02.085: INFO: Nov 26 07:25:02.243: INFO: Unable to fetch esipp-6505/external-local-update-ld697/netexec logs: an error on the server ("unknown") has prevented the request from succeeding (get pods external-local-update-ld697) Nov 26 07:25:02.307: INFO: Logging node info for node bootstrap-e2e-master Nov 26 07:25:02.369: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f12dfba9-8340-4384-a012-464bb8ff014b 6314 0 2022-11-26 07:14:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:24:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:24:53 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.104.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4341b6df721ee06de14317c6e64c7913,SystemUUID:4341b6df-721e-e06d-e143-17c6e64c7913,BootID:0fd660c7-349c-4c78-8001-012f07790551,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:25:02.369: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 07:25:02.434: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 07:25:02.558: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 07:25:02.558: INFO: Logging node info for node bootstrap-e2e-minion-group-svrn Nov 26 07:25:02.628: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-svrn 0b46f31f-d25c-4604-ba86-b3e98c09449d 6273 0 2022-11-26 07:14:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-svrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:20:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 07:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-svrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:20:30 +0000 
UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:20:30 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:20:30 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:20:30 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.23.98,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a792d55bc5ad5cdad144cb5b4dfa29f,SystemUUID:6a792d55-bc5a-d5cd-ad14-4cb5b4dfa29f,BootID:d19434b3-94eb-452d-a279-fc84362b7cab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e 
registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} 
Nov 26 07:25:02.629: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-svrn Nov 26 07:25:02.695: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-svrn Nov 26 07:25:02.797: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-svrn: error trying to reach service: No agent available Nov 26 07:25:02.797: INFO: Logging node info for node bootstrap-e2e-minion-group-v6kp Nov 26 07:25:02.857: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v6kp 1b4c00d7-9f80-4c8f-bcb4-5fdf079da6d6 6057 0 2022-11-26 07:14:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v6kp kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-v6kp topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-483":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-multivolume-9374":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-provisioning-2652":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-provisioning-7222":"bootstrap-e2e-minion-group-v6kp","csi-mock-csi-mock-volumes-5231":"bootstrap-e2e-minion-group-v6kp","csi-mock-csi-mock-volumes-8728":"csi-mock-csi-mock-volumes-8728","csi-mock-csi-mock-volumes-9178":"csi-mock-csi-mock-volumes-9178"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:21:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 07:24:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 07:24:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-v6kp,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:31 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:18 +0000 
UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:18 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:18 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:21:18 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.227.156.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35b699b12f5019228f1e2e38d963976d,SystemUUID:35b699b1-2f50-1922-8f1e-2e38d963976d,BootID:5793a9ad-d1f5-4512-925a-2b321cb699ee,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627 
kubernetes.io/csi/csi-hostpath-provisioning-7531^5ba7aec8-6d5a-11ed-8f3e-8ec752fd93c1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7531^5ba7aec8-6d5a-11ed-8f3e-8ec752fd93c1,DevicePath:,},},Config:nil,},} Nov 26 07:25:02.857: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v6kp Nov 26 07:25:03.013: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v6kp Nov 26 07:25:03.296: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-v6kp: error trying to reach service: No agent available Nov 26 07:25:03.296: INFO: Logging node info for node bootstrap-e2e-minion-group-zhjw Nov 26 07:25:03.563: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zhjw 02d1b2e8-572a-4705-ba12-2a030476f45b 6163 0 2022-11-26 07:14:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zhjw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zhjw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-398":"csi-mock-csi-mock-volumes-398"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:19:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 07:24:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 07:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-zhjw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:24:33 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:21:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:21:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.36.0,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cc67b7d9c606cf13b518cf0cb8b22fe6,SystemUUID:cc67b7d9-c606-cf13-b518-cf0cb8b22fe6,BootID:a06198bc-32f7-4d08-b37d-b3aaad431e87,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:25:03.564: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zhjw Nov 26 07:25:03.756: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zhjw Nov 26 07:25:03.929: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zhjw: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6505" for this suite. 11/26/22 07:25:03.93
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 There were additional failures detected after the initial failure: [FAILED] Nov 26 07:38:43.960: failed to list events in namespace "esipp-5578": Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/events": dial tcp 34.127.104.189:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 07:38:44.000: Couldn't delete ns: "esipp-5578": Delete "https://34.127.104.189/api/v1/namespaces/esipp-5578": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/esipp-5578", Err:(*net.OpError)(0xc0040613b0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:29:21.599 Nov 26 07:29:21.599: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 07:29:21.6 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:29:21.897 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:29:22.046 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-5578/external-local-nodes with type=LoadBalancer 11/26/22 07:29:22.562 STEP: setting ExternalTrafficPolicy=Local 11/26/22 07:29:22.562 STEP: waiting for loadbalancer for service esipp-5578/external-local-nodes 11/26/22 07:29:22.938 Nov 26 07:29:22.939: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer Nov 26 07:32:43.034: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:45.034: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:47.035: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:49.035: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused [... the same "Retrying ...." / connection refused pair was logged every 2s from 07:32:51 through 07:33:19 ...] Nov 26 07:33:21.035: INFO: Retrying ....
error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:23.035: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:25.034: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:27.034: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:29.034: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m0.837s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5578/external-local-nodes (Step Runtime: 4m59.498s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1674 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0004ad440, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x10?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0043af0e0?, 0xc0052bfa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0012c11b0?, 0x7fa7740?, 0xc000202b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc001f4f9a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc001f4f9a0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc001f4f9a0, 0x6aba880?, 0xc0052bfd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc001f4f9a0, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0014b9680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ [... the same Progress Report, with an identical Spec Goroutine stack, repeated every 20s from Spec Runtime 5m20s through 8m0s at the step "waiting for loadbalancer for service esipp-5578/external-local-nodes" ...] ------------------------------ STEP: waiting for loadbalancer for service esipp-5578/external-local-nodes 11/26/22 07:37:35.06 Nov 26 07:37:35.060: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-5578 11/26/22 07:37:35.122 STEP: creating a selector 11/26/22 07:37:35.122 STEP: Creating the service pods in kubernetes 11/26/22 07:37:35.122 Nov 26 07:37:35.122: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 07:37:35.577: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-5578" to be "running and ready" Nov 26 07:37:35.645: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false.
Elapsed: 68.257975ms Nov 26 07:37:35.645: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 07:37:37.700: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.123391085s Nov 26 07:37:37.701: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:39.706: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.128536316s Nov 26 07:37:39.706: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:41.707: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.129601155s Nov 26 07:37:41.707: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m20.868s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m20.031s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 7.345s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 1674 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0019f4ee8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a3b5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000f5fa00}, {0xc003296ca0, 0xa}, {0xc00446cc10, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000f5fa00?}, {0xc00446cc10?, 0xc003380f60?}, {0xc003296ca0?, 0xc0052bf820?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0014b9680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:37:43.717: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.140315834s Nov 26 07:37:43.717: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:45.716: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.139060167s Nov 26 07:37:45.716: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:47.783: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.206298943s Nov 26 07:37:47.783: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:49.754: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.176883418s Nov 26 07:37:49.754: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:51.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.141514091s Nov 26 07:37:51.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:53.734: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.156462811s Nov 26 07:37:53.734: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:55.705: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.127485411s Nov 26 07:37:55.705: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:57.716: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.138782169s Nov 26 07:37:57.716: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:37:59.766: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.189158383s Nov 26 07:37:59.766: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:01.712: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.134758185s Nov 26 07:38:01.712: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m40.871s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m40.034s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 27.348s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 1674 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0019f4ee8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a3b5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000f5fa00}, {0xc003296ca0, 0xa}, {0xc00446cc10, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000f5fa00?}, {0xc00446cc10?, 0xc003380f60?}, {0xc003296ca0?, 0xc0052bf820?}, 0x271e5fe?) 
test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0014b9680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:38:03.824: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.246678907s Nov 26 07:38:03.824: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:05.733: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.15552793s Nov 26 07:38:05.733: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:07.776: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.198526406s Nov 26 07:38:07.776: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:09.752: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.174738563s Nov 26 07:38:09.752: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:11.704: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.127123271s Nov 26 07:38:11.704: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:13.852: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.274924397s Nov 26 07:38:13.852: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:15.719: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.142326391s Nov 26 07:38:15.719: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:17.753: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.176178781s Nov 26 07:38:17.753: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:19.765: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.187591813s Nov 26 07:38:19.765: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:21.714: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.136933564s Nov 26 07:38:21.714: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m0.874s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 9m0.037s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 47.351s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 1674 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0019f4ee8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a3b5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000f5fa00}, {0xc003296ca0, 0xa}, {0xc00446cc10, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000f5fa00?}, {0xc00446cc10?, 0xc003380f60?}, {0xc003296ca0?, 0xc0052bf820?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0014b9680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:38:23.745: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.167769706s Nov 26 07:38:23.745: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:25.753: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.175898502s Nov 26 07:38:25.753: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:27.711: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.134369601s Nov 26 07:38:27.711: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:29.703: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.126110807s Nov 26 07:38:29.703: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:31.705: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.128124106s Nov 26 07:38:31.705: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:33.720: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.142809329s Nov 26 07:38:33.720: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:35.774: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.196433677s Nov 26 07:38:35.774: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:37.765: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.187786166s Nov 26 07:38:37.765: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:39.738: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.160630358s Nov 26 07:38:39.738: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 07:38:41.697: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.120192722s Nov 26 07:38:41.697: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m20.877s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 9m20.04s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 1m7.354s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 1674 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0019f4ee8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a3b5d8?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000f5fa00}, {0xc003296ca0, 0xa}, {0xc00446cc10, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000f5fa00?}, {0xc00446cc10?, 0xc003380f60?}, {0xc003296ca0?, 0xc0052bf820?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?) 
test/e2e/framework/network/utils.go:778
> k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?})
    test/e2e/framework/network/utils.go:131
> k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1382
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0014b9680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:38:43.685: INFO: Encountered non-retryable error while getting pod esipp-5578/netserver-0: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/pods/netserver-0": dial tcp 34.127.104.189:443: connect: connection refused
Nov 26 07:38:43.686: INFO: Unexpected error:
<*fmt.wrapError | 0xc0012f3800>: {
    msg: "error while waiting for pod esipp-5578/netserver-0 to be running and ready: Get \"https://34.127.104.189/api/v1/namespaces/esipp-5578/pods/netserver-0\": dial tcp 34.127.104.189:443: connect: connection refused",
    err: <*url.Error | 0xc0040c97a0>{
        Op: "Get",
        URL: "https://34.127.104.189/api/v1/namespaces/esipp-5578/pods/netserver-0",
        Err: <*net.OpError | 0xc003571770>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0020c0a50>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 127, 104, 189],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0012f37c0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    },
}
Nov 26 07:38:43.686: FAIL: error while waiting for pod esipp-5578/netserver-0 to be running and ready: Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/pods/netserver-0": dial tcp 34.127.104.189:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0)
    test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?)
    test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?)
    test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?})
    test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1382 +0x445
Nov 26 07:38:43.729: INFO: Unexpected error:
<*errors.errorString | 0xc00112ff80>: {
    s: "failed to get Service \"external-local-nodes\": Get \"https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes\": dial tcp 34.127.104.189:443: connect: connection refused",
}
Nov 26 07:38:43.729: FAIL: failed to get Service "external-local-nodes": Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/services/external-local-nodes": dial tcp 34.127.104.189:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.5.2()
    test/e2e/network/loadbalancer.go:1366 +0xae
panic({0x70eb7e0, 0xc000254a80})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc002dda0d0, 0xd0}, {0xc000a3b700?, 0xc002dda0d0?, 0xc000a3b728?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc0012f3800}, {0x0?, 0xc003296ca0?, 0xc0052bf820?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002e00e0, {0x75c6f7c, 0x9}, 0xc00201dbc0)
    test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002e00e0, 0x7febbc154bb8?)
    test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002e00e0, 0x3c?)
    test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00117e000, {0x0, 0x0, 0xc000d654a0?})
    test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1382 +0x445
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 26 07:38:43.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 26 07:38:43.769: INFO: Output of kubectl describe svc:
Nov 26 07:38:43.769: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5578 describe svc --namespace=esipp-5578'
Nov 26 07:38:43.920: INFO: rc: 1
Nov 26 07:38:43.920: INFO:
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 07:38:43.92
STEP: Collecting events from namespace "esipp-5578". 11/26/22 07:38:43.921
Nov 26 07:38:43.960: INFO: Unexpected error: failed to list events in namespace "esipp-5578":
<*url.Error | 0xc0040c9b30>: {
    Op: "Get",
    URL: "https://34.127.104.189/api/v1/namespaces/esipp-5578/events",
    Err: <*net.OpError | 0xc003571860>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0020c1260>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 127, 104, 189],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc0012f39e0>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 26 07:38:43.960: FAIL: failed to list events in namespace "esipp-5578": Get "https://34.127.104.189/api/v1/namespaces/esipp-5578/events": dial tcp 34.127.104.189:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0024be5c0, {0xc003296ca0, 0xa})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000f5fa00}, {0xc003296ca0, 0xa})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0024be650?, {0xc003296ca0?, 0x7fa7740?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00117e000)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc001081730?, 0xc003c61f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc001081730?, 0x7fadfa0?}, {0xae73300?, 0xc003c61f80?, 0x26225bd?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  tear down framework | framework.go:193
STEP: Destroying namespace "esipp-5578" for this suite. 11/26/22 07:38:43.961
Nov 26 07:38:44.000: FAIL: Couldn't delete ns: "esipp-5578": Delete "https://34.127.104.189/api/v1/namespaces/esipp-5578": dial tcp 34.127.104.189:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.127.104.189/api/v1/namespaces/esipp-5578", Err:(*net.OpError)(0xc0040613b0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00117e000)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc001081620?, 0xc0022e8fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc001081620?, 0x0?}, {0xae73300?, 0x5?, 0xc003186b10?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cf4000)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.2()
    test/e2e/network/loadbalancer.go:1262 +0x113
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:32:43.625 Nov 26 07:32:43.625: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 07:32:43.627 Nov 26 07:32:43.666: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:45.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:47.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:49.707: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:51.707: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:53.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:55.707: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:57.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:32:59.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:01.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial 
tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:03.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:05.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:07.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:09.707: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:11.707: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:13.706: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:13.745: INFO: Unexpected error while creating namespace: Post "https://34.127.104.189/api/v1/namespaces": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:33:13.745: INFO: Unexpected error: <*errors.errorString | 0xc000195d70>: { s: "timed out waiting for the condition", } Nov 26 07:33:13.745: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cf4000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 07:33:13.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace 
information after failure 11/26/22 07:33:13.785 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/network/loadbalancer.go:1476 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1476 +0xabdfrom junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:37:22.965 Nov 26 07:37:22.965: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 07:37:22.967 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:37:23.194 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:37:23.295 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work from pods test/e2e/network/loadbalancer.go:1422 STEP: creating a service esipp-5833/external-local-pods with type=LoadBalancer 11/26/22 07:37:23.535 STEP: setting ExternalTrafficPolicy=Local 11/26/22 07:37:23.535 STEP: waiting for loadbalancer for service esipp-5833/external-local-pods 11/26/22 07:37:23.704 Nov 26 07:37:23.704: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer Nov 26 07:38:43.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:45.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:47.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:49.826: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:51.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:53.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:55.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:57.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:59.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:01.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:03.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:05.825: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:07.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:09.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:11.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:13.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:15.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:17.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:19.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:21.825: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:23.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:25.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:27.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:29.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:31.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:33.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:35.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:37.826: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:39.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:41.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:43.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:45.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:47.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:49.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:51.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:53.826: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:55.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:57.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m0.571s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 4m59.831s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m20.573s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 5m19.833s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m40.575s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 5m39.835s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m0.577s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 5m59.837s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m20.58s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m20.009s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 6m19.84s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m40.582s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m40.012s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 6m39.842s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m0.585s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m0.014s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 6m59.845s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:44:39.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:44:41.825: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m20.587s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m20.017s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 7m19.847s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) 
	test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0)
	test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?)
	test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
	test/e2e/network/loadbalancer.go:1428
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:44:43.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m41.557s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 7m40.986s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 7m40.817s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 3869 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?)
    test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0)
    test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?)
    test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1428
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:47:25.826: INFO: Retrying ....
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:27.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:29.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:31.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:33.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:35.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:37.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:39.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:41.825: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:43.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m21.575s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m21.004s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 10m20.835s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:47:45.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:47.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:49.826: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:51.825: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:47:53.826: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.127.104.189/api/v1/namespaces/esipp-5833/services/external-local-pods": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m41.577s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m41.007s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 10m40.837s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m1.58s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m1.009s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 11m0.84s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m21.582s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m21.012s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 11m20.842s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 
0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m41.585s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m41.014s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 11m40.845s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m1.587s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m1.017s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 12m0.847s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m21.59s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m21.019s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 12m20.85s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m41.592s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m41.022s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 12m40.852s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m1.596s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m1.025s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 13m0.856s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m21.598s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m21.028s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-5833/external-local-pods (Step Runtime: 13m20.858s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0049924c8, 0x2fdb16a?) 
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00292d620?, 0xc0053b5a40?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc004e9e100?, 0x7fa7740?, 0xc00019ed00?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0053ad900, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
  test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0053ad900, 0x43?)
  test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0053ad900, 0x6aba880?, 0xc0053b5cf0)
  test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0053ad900, 0xc0025931e0?, 0x1, 0xa?)
  test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1428
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
STEP: creating a pod to be part of the service external-local-pods 11/26/22 07:50:49.837
Nov 26 07:50:49.921: INFO: Waiting up to 2m0s for 1 pods to be created
Nov 26 07:50:49.967: INFO: Found all 1 pods
Nov 26 07:50:49.967: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-pods-nhgcw]
Nov 26 07:50:49.967: INFO: Waiting up to 2m0s for pod "external-local-pods-nhgcw" in namespace "esipp-5833" to be "running and ready"
Nov 26 07:50:50.033: INFO: Pod "external-local-pods-nhgcw": Phase="Pending", Reason="", readiness=false. Elapsed: 66.624404ms
Nov 26 07:50:50.033: INFO: Error evaluating pod condition running and ready: want pod 'external-local-pods-nhgcw' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending'
Nov 26 07:50:52.101: INFO: Pod "external-local-pods-nhgcw": Phase="Running", Reason="", readiness=true. Elapsed: 2.133802227s
Nov 26 07:50:52.101: INFO: Pod "external-local-pods-nhgcw" satisfied condition "running and ready"
Nov 26 07:50:52.101: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-pods-nhgcw]
STEP: waiting for loadbalancer for service esipp-5833/external-local-pods 11/26/22 07:50:52.101
Nov 26 07:50:52.101: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer
STEP: Creating pause pod deployment to make sure, pausePods are in desired state 11/26/22 07:50:52.169
Nov 26 07:50:52.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 26, 7, 50, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 26, 7, 50, 52, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"pause-pod-deployment-5d788b4b5\""}}, CollisionCount:(*int32)(nil)}
Nov 26 07:50:54.555: INFO: Waiting up to 5m0s curl 34.105.76.113:80/clientip
STEP: Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw 11/26/22 07:50:54.656
Nov 26 07:50:54.656: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:50:55.343: INFO: rc: 7
Nov 26 07:50:55.343: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:50:57.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:50:58.080: INFO: rc: 7
Nov 26 07:50:58.080: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:50:59.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:00.021: INFO: rc: 7
Nov 26 07:51:00.021: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:01.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:01.854: INFO: rc: 7
Nov 26 07:51:01.854: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:03.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:03.875: INFO: rc: 7
Nov 26 07:51:03.875: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m41.6s)
    test/e2e/network/loadbalancer.go:1422
  In [It] (Node Runtime: 13m41.029s)
    test/e2e/network/loadbalancer.go:1422
    At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 9.908s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:51:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:05.874: INFO: rc: 7
Nov 26 07:51:05.874: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:07.903: INFO: rc: 7
Nov 26 07:51:07.903: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:09.855: INFO: rc: 7
Nov 26 07:51:09.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:11.911: INFO: rc: 7
Nov 26 07:51:11.911: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:13.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:13.859: INFO: rc: 7
Nov 26 07:51:13.859: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:15.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:15.932: INFO: rc: 7
Nov 26 07:51:15.932: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:17.916: INFO: rc: 7
Nov 26 07:51:17.916: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:19.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:19.879: INFO: rc: 7
Nov 26 07:51:19.879: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:21.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:21.909: INFO: rc: 7
Nov 26 07:51:21.909: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:23.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:23.892: INFO: rc: 7
Nov 26 07:51:23.892: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m1.603s)
    test/e2e/network/loadbalancer.go:1422
  In [It] (Node Runtime: 14m1.032s)
    test/e2e/network/loadbalancer.go:1422
    At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 29.911s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:51:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:25.846: INFO: rc: 7
Nov 26 07:51:25.846: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:27.910: INFO: rc: 7
Nov 26 07:51:27.911: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:29.855: INFO: rc: 7
Nov 26 07:51:29.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:31.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:31.974: INFO: rc: 7
Nov 26 07:51:31.974: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:33.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:33.924: INFO: rc: 7
Nov 26 07:51:33.924: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:35.852: INFO: rc: 7
Nov 26 07:51:35.852: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:37.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:37.895: INFO: rc: 7
Nov 26 07:51:37.895: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:39.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:39.878: INFO: rc: 7
Nov 26 07:51:39.878: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:41.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:42.874: INFO: rc: 7
Nov 26 07:51:42.874: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:43.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:43.863: INFO: rc: 7
Nov 26 07:51:43.863: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m21.605s)
    test/e2e/network/loadbalancer.go:1422
  In [It] (Node Runtime: 14m21.034s)
    test/e2e/network/loadbalancer.go:1422
    At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 49.913s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:51:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:45.919: INFO: rc: 7
Nov 26 07:51:45.919: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:47.343: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:47.843: INFO: rc: 7
Nov 26 07:51:47.843: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:49.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:49.855: INFO: rc: 7
Nov 26 07:51:49.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:51.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:51.865: INFO: rc: 7
Nov 26 07:51:51.865: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:53.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:53.866: INFO: rc: 7
Nov 26 07:51:53.866: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:55.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:55.852: INFO: rc: 7
Nov 26 07:51:55.852: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:57.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:57.851: INFO: rc: 7
Nov 26 07:51:57.851: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:51:59.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:51:59.856: INFO: rc: 7
Nov 26 07:51:59.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:52:01.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:52:01.863: INFO: rc: 7
Nov 26 07:52:01.863: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:52:03.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:52:03.856: INFO: rc: 7
Nov 26 07:52:03.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
------------------------------
Progress Report
for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m41.607s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 14m41.037s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 1m9.915s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:52:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:05.851: INFO: rc: 7 Nov 26 07:52:05.851: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:07.853: INFO: rc: 7 Nov 26 07:52:07.853: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:09.859: INFO: rc: 7 Nov 26 07:52:09.859: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:11.687: INFO: rc: 1 Nov 26 07:52:11.687: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to 
upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:13.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:13.702: INFO: rc: 1 Nov 26 07:52:13.702: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:15.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:15.778: INFO: rc: 1 Nov 26 07:52:15.778: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config 
--namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:17.688: INFO: rc: 1 Nov 26 07:52:17.688: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:19.699: INFO: rc: 1 Nov 26 07:52:19.699: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:21.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:21.695: INFO: rc: 1 Nov 26 07:52:21.695: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:23.709: INFO: rc: 1 Nov 26 07:52:23.709: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m1.61s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 15m1.039s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 1m29.918s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:52:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:25.688: INFO: rc: 1 Nov 26 07:52:25.688: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 
07:52:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:27.724: INFO: rc: 1 Nov 26 07:52:27.724: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:29.708: INFO: rc: 1 Nov 26 07:52:29.708: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-pause") error: exit status 1, retry until timeout Nov 26 07:52:31.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 
34.105.76.113:80/clientip' Nov 26 07:52:31.856: INFO: rc: 7 Nov 26 07:52:31.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:33.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:33.888: INFO: rc: 7 Nov 26 07:52:33.889: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:35.850: INFO: rc: 7 Nov 26 07:52:35.850: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 
--kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:37.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:37.915: INFO: rc: 7 Nov 26 07:52:37.915: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:39.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:39.877: INFO: rc: 7 Nov 26 07:52:39.877: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 
34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:41.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:41.853: INFO: rc: 7 Nov 26 07:52:41.853: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:43.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:43.946: INFO: rc: 7 Nov 26 07:52:43.946: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work 
from pods (Spec Runtime: 15m21.612s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 15m21.041s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 1m49.92s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:52:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:45.849: INFO: rc: 7 Nov 26 07:52:45.849: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:47.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:47.916: INFO: rc: 7 Nov 26 07:52:47.916: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:49.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:49.876: INFO: rc: 7 Nov 26 07:52:49.876: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:52:51.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:52:51.848: INFO: rc: 7 Nov 26 07:52:51.848: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
Nov 26 07:52:53.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:52:53.939: INFO: rc: 7
Nov 26 07:52:53.939: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
[The same kubectl exec/curl attempt was repeated every ~2 s with identical output, all rc: 7: 07:52:55.345, 07:52:57.344, 07:52:59.344, 07:53:01.345, 07:53:03.345.]
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m41.614s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 15m41.043s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 2m9.922s)
        test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
	test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[Identical attempts continued every ~2 s, all rc: 7: 07:53:05 through 07:53:23.]
[Progress Report for Ginkgo Process #2 repeated, identical except runtimes: Spec Runtime 16m1.616s, Node Runtime 16m1.046s, Step Runtime 2m29.924s.]
[Identical attempts continued every ~2 s, all rc: 7: 07:53:25 through 07:53:43.]
[Progress Report repeated: Spec Runtime 16m21.619s, Node Runtime 16m21.048s, Step Runtime 2m49.927s.]
[Identical attempts continued every ~2 s, all rc: 7: 07:53:45 through 07:54:03.]
[Progress Report repeated: Spec Runtime 16m41.621s, Node Runtime 16m41.05s, Step Runtime 3m9.929s.]
[Identical attempt at 07:54:05.344, rc: 7.]
Nov 26 07:54:07.343: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:54:07.902: INFO: rc: 7
Nov 26 07:54:07.902: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:09.856: INFO: rc: 7 Nov 26 07:54:09.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:11.854: INFO: rc: 7 Nov 26 07:54:11.854: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:13.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:13.871: INFO: rc: 7 Nov 26 07:54:13.871: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:15.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:15.871: INFO: rc: 7 Nov 26 07:54:15.871: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:17.902: INFO: rc: 7 Nov 26 07:54:17.902: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:19.868: INFO: rc: 7 Nov 26 07:54:19.869: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:21.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:21.847: INFO: rc: 7 Nov 26 
07:54:21.847: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:23.868: INFO: rc: 7 Nov 26 07:54:23.868: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 17m1.623s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 17m1.053s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 3m29.931s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 
0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:54:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:25.869: INFO: rc: 7 Nov 26 07:54:25.869: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit 
code 7 error: exit status 7, retry until timeout Nov 26 07:54:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:27.897: INFO: rc: 7 Nov 26 07:54:27.897: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:29.857: INFO: rc: 7 Nov 26 07:54:29.857: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:31.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config 
--namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:31.889: INFO: rc: 7 Nov 26 07:54:31.889: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:33.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:33.863: INFO: rc: 7 Nov 26 07:54:33.863: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:35.852: INFO: rc: 7 Nov 26 07:54:35.852: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:37.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:37.901: INFO: rc: 7 Nov 26 07:54:37.901: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:39.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:39.871: INFO: rc: 7 Nov 26 07:54:39.871: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:41.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:41.881: INFO: rc: 7 Nov 26 07:54:41.881: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:43.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:43.883: INFO: rc: 7 Nov 26 07:54:43.883: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report 
for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 17m21.626s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 17m21.055s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 3m49.934s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:54:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:45.847: INFO: rc: 7 Nov 26 07:54:45.847: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:47.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:47.902: INFO: rc: 7 Nov 26 07:54:47.903: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:49.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:49.850: INFO: rc: 7 Nov 26 07:54:49.850: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:51.343: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:51.869: INFO: rc: 7 Nov 26 07:54:51.869: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:53.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:53.868: INFO: rc: 7 Nov 26 07:54:53.868: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:55.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:55.876: INFO: rc: 7 Nov 26 07:54:55.876: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:57.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:57.903: INFO: rc: 7 Nov 26 07:54:57.903: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:54:59.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:54:59.878: INFO: rc: 7 Nov 26 07:54:59.878: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:01.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:01.856: INFO: rc: 7 Nov 26 
07:55:01.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:03.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:03.860: INFO: rc: 7 Nov 26 07:55:03.860: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 17m41.629s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 17m41.059s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 4m9.937s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 
0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:55:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:05.857: INFO: rc: 7 Nov 26 07:55:05.857: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit 
code 7 error: exit status 7, retry until timeout Nov 26 07:55:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:07.895: INFO: rc: 7 Nov 26 07:55:07.895: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:09.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:09.856: INFO: rc: 7 Nov 26 07:55:09.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config 
--namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:11.855: INFO: rc: 7 Nov 26 07:55:11.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:13.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:13.862: INFO: rc: 7 Nov 26 07:55:13.862: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:15.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:16.108: INFO: rc: 7 Nov 26 07:55:16.108: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:17.900: INFO: rc: 7 Nov 26 07:55:17.900: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:19.927: INFO: rc: 7 Nov 26 07:55:19.927: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:21.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:21.861: INFO: rc: 7 Nov 26 07:55:21.861: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:23.874: INFO: rc: 7 Nov 26 07:55:23.874: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report 
for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 18m1.632s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 18m1.061s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 4m29.94s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:55:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:25.879: INFO: rc: 7 Nov 26 07:55:25.879: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:27.905: INFO: rc: 7 Nov 26 07:55:27.905: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:29.853: INFO: rc: 7 Nov 26 07:55:29.853: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:31.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:31.854: INFO: rc: 7 Nov 26 07:55:31.854: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:33.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:33.857: INFO: rc: 7 Nov 26 07:55:33.857: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:35.991: INFO: rc: 7 Nov 26 07:55:35.991: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:37.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:55:37.968: INFO: rc: 7 Nov 26 07:55:37.968: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:55:39.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 18m21.634s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 18m21.063s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 4m49.942s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000c7a420?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) 
test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 18m41.638s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 18m41.067s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 5m9.946s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000c7a420?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:56:04.831: INFO: rc: 7 Nov 26 07:56:04.831: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:05.858: INFO: rc: 7 Nov 26 07:56:05.858: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit 
status 7, retry until timeout Nov 26 07:56:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:07.846: INFO: rc: 7 Nov 26 07:56:07.846: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:09.962: INFO: rc: 7 Nov 26 07:56:09.962: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec 
pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:11.930: INFO: rc: 7 Nov 26 07:56:11.930: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:13.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:13.870: INFO: rc: 7 Nov 26 07:56:13.870: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:15.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:15.855: INFO: rc: 7 Nov 26 07:56:15.855: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:17.855: INFO: rc: 7 Nov 26 07:56:17.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:19.884: INFO: rc: 7 Nov 26 07:56:19.884: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:21.343: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:21.852: INFO: rc: 7 Nov 26 07:56:21.852: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 07:56:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:56:23.860: INFO: rc: 7 Nov 26 07:56:23.860: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout ------------------------------ Progress Report 
for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 19m1.64s)
  test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 19m1.07s)
    test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 5m29.948s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:56:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:56:25.845: INFO: rc: 7
Nov 26 07:56:25.845: INFO: got err: [same rc: 7 / curl exit code 7 error, retry until timeout]
[Nov 26 07:56:27.344 through 07:56:39.344: seven further retries of the same command, each rc: 7 with the same curl exit code 7 error, retry until timeout]
Nov 26 07:56:41.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
------------------------------ Progress Report
for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 19m21.643s)
  test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 19m21.073s)
    test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 5m49.951s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 3869 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc00376e9a0?, 0x0?})
  test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
  test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?})
  test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
  test/e2e/framework/pod/output/output.go:82
> k8s.io/kubernetes/test/e2e/network.glob..func20.6.3()
  test/e2e/network/loadbalancer.go:1468
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
  vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
  test/e2e/network/loadbalancer.go:1467
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[Identical progress report at Spec Runtime 19m41.647s / Node Runtime 19m41.076s / Step Runtime 6m9.955s, with the same goroutine 3869 stack]
------------------------------
Nov 26 07:57:10.487: INFO: rc: 1
Nov 26 07:57:10.487: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
Nov 26 07:57:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:11.760: INFO: rc: 1
Nov 26 07:57:11.760: INFO: got err: [same 'No agent available' error, exit status 1, retry until timeout]
Nov 26 07:57:13.344: INFO: Running
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:13.883: INFO: rc: 1
Nov 26 07:57:13.883: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
[Nov 26 07:57:15.344 through 07:57:23.344: five further retries of the same command, each rc: 1 with 'Error from server: error dialing backend: No agent available', exit status 1, retry until timeout]
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 20m1.65s)
  test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 20m1.079s)
    test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 6m29.958s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
[goroutine 3869 blocked in the same wait.PollImmediate stack as above]
------------------------------
Nov 26 07:57:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:25.855: INFO: rc: 1
Nov 26 07:57:25.855: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("agnhost-pause")
error: exit status 1, retry until timeout
Nov 26 07:57:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189
--kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:27.927: INFO: rc: 1
Nov 26 07:57:27.927: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("agnhost-pause")
error: exit status 1, retry until timeout
[Nov 26 07:57:29.344 through 07:57:43.344: eight further retries of the same command, each rc: 1 with 'unable to upgrade connection: container not found ("agnhost-pause")', exit status 1, retry until timeout]
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 20m21.652s)
  test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 20m21.082s)
    test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 6m49.96s)
      test/e2e/network/loadbalancer.go:1466

Spec Goroutine
[goroutine 3869 blocked in the same wait.PollImmediate stack as above]
------------------------------
Nov 26 07:57:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:45.814: INFO: rc: 1
Nov 26 07:57:45.814: INFO: got err: [same 'container not found ("agnhost-pause")' error, exit status 1, retry until timeout]
Nov 26 07:57:47.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:47.800: INFO: rc: 1
Nov 26 07:57:47.800: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
Nov 26 07:57:49.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 07:57:49.804: INFO: rc: 1
Nov 26 07:57:49.804: INFO: got err: [same 'No agent available' error, exit status 1, retry until timeout]
Nov 26 07:57:51.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:57:51.763: INFO: rc: 1 Nov 26 07:57:51.763: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:57:53.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:57:53.786: INFO: rc: 1 Nov 26 07:57:53.786: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:57:55.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:57:55.838: INFO: rc: 1 Nov 26 07:57:55.838: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:57:57.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:57:57.780: INFO: rc: 1 Nov 26 07:57:57.780: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:57:59.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:57:59.816: INFO: rc: 1 Nov 26 07:57:59.816: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error 
from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:01.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:01.735: INFO: rc: 1 Nov 26 07:58:01.735: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:03.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:03.797: INFO: rc: 1 Nov 26 07:58:03.798: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 20m41.655s) test/e2e/network/loadbalancer.go:1422 
In [It] (Node Runtime: 20m41.084s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 7m9.963s) test/e2e/network/loadbalancer.go:1466 ------------------------------ Nov 26 07:58:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:05.728: INFO: rc: 1 Nov 26 07:58:05.728: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:07.792: INFO: rc: 1 Nov 26 07:58:07.792: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:09.756: INFO: rc: 1 Nov 26 07:58:09.756: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error 
from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:11.782: INFO: rc: 1 Nov 26 07:58:11.782: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:13.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:13.858: INFO: rc: 1 Nov 26 07:58:13.858: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:15.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec 
pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:15.728: INFO: rc: 1 Nov 26 07:58:15.728: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:17.763: INFO: rc: 1 Nov 26 07:58:17.763: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:19.815: INFO: rc: 1 Nov 26 07:58:19.815: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 
--kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:21.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:21.724: INFO: rc: 1 Nov 26 07:58:21.724: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:23.874: INFO: rc: 1 Nov 26 07:58:23.874: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout 
------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 21m1.658s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 21m1.087s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 7m29.966s) test/e2e/network/loadbalancer.go:1466 ------------------------------ Nov 26 07:58:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:25.813: INFO: rc: 1 Nov 26 07:58:25.813: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:27.767: INFO: rc: 1 Nov 26 07:58:27.767: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 
--kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:29.791: INFO: rc: 1 Nov 26 07:58:29.791: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:31.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:31.739: INFO: rc: 1 Nov 26 07:58:31.739: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:33.344: 
INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:33.993: INFO: rc: 1 Nov 26 07:58:33.993: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:35.730: INFO: rc: 1 Nov 26 07:58:35.730: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:37.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 
07:58:37.860: INFO: rc: 1 Nov 26 07:58:37.860: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:39.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:39.802: INFO: rc: 1 Nov 26 07:58:39.802: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:41.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:41.796: INFO: rc: 1 Nov 26 07:58:41.796: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s 
--connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:43.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:43.952: INFO: rc: 1 Nov 26 07:58:43.952: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 21m21.661s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 21m21.091s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 7m49.969s) test/e2e/network/loadbalancer.go:1466 ------------------------------ Nov 26 07:58:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:45.802: INFO: rc: 1 Nov 26 07:58:45.802: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config 
--namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:47.812: INFO: rc: 1 Nov 26 07:58:47.812: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:49.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:49.993: INFO: rc: 1 Nov 26 07:58:49.993: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:51.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:51.750: INFO: rc: 1 Nov 26 07:58:51.750: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:53.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:53.882: INFO: rc: 1 Nov 26 07:58:53.882: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:55.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:55.746: INFO: rc: 1 Nov 26 07:58:55.746: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry 
until timeout Nov 26 07:58:57.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:57.777: INFO: rc: 1 Nov 26 07:58:57.777: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:58:59.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:58:59.787: INFO: rc: 1 Nov 26 07:58:59.787: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:01.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 
34.105.76.113:80/clientip' Nov 26 07:59:01.825: INFO: rc: 1 Nov 26 07:59:01.825: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:03.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:03.846: INFO: rc: 1 Nov 26 07:59:03.846: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 21m41.664s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 21m41.094s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 8m9.972s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 
0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:59:05.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:05.791: INFO: rc: 1 Nov 26 07:59:05.791: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry 
until timeout Nov 26 07:59:07.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:08.018: INFO: rc: 1 Nov 26 07:59:08.018: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:09.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:09.848: INFO: rc: 1 Nov 26 07:59:09.848: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 
34.105.76.113:80/clientip' Nov 26 07:59:11.818: INFO: rc: 1 Nov 26 07:59:11.818: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:13.343: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:13.783: INFO: rc: 1 Nov 26 07:59:13.783: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:15.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:15.734: INFO: rc: 1 Nov 26 07:59:15.734: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec 
pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:17.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:17.818: INFO: rc: 1 Nov 26 07:59:17.818: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:19.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:19.783: INFO: rc: 1 Nov 26 07:59:19.783: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:21.344: INFO: Running 
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:21.764: INFO: rc: 1 Nov 26 07:59:21.764: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:23.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:23.806: INFO: rc: 1 Nov 26 07:59:23.806: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 22m1.667s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 22m1.096s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod 
pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 8m29.975s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 07:59:25.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:25.753: INFO: rc: 1 Nov 26 07:59:25.753: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 
--kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:27.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:27.904: INFO: rc: 1 Nov 26 07:59:27.904: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:29.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:29.840: INFO: rc: 1 Nov 26 07:59:29.840: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:31.344: 
INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:31.801: INFO: rc: 1 Nov 26 07:59:31.801: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:33.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:33.765: INFO: rc: 1 Nov 26 07:59:33.765: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:35.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 
07:59:35.723: INFO: rc: 1 Nov 26 07:59:35.723: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:37.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' Nov 26 07:59:37.758: INFO: rc: 1 Nov 26 07:59:37.758: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 07:59:39.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 22m21.669s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 22m21.099s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 
34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 8m49.977s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc002dde2c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 22m41.672s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 22m41.101s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 9m9.98s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc002dde2c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 08:00:09.735: INFO: rc: 1 Nov 26 08:00:09.735: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip: Command stdout: stderr: Error from server: error dialing backend: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" error: exit status 1, retry until timeout Nov 26 08:00:11.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip' ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 23m1.675s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 23m1.104s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.105.76.113 from pod 
pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 9m29.983s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 3869 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc002dde420?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?) 
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 08:00:39.506: INFO: rc: 1
Nov 26 08:00:39.506: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
Nov 26 08:00:41.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 08:00:41.741: INFO: rc: 1
Nov 26 08:00:41.741: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
Nov 26 08:00:43.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 08:00:43.835: INFO: rc: 1
Nov 26 08:00:43.835: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 23m21.679s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 23m21.108s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 9m49.987s)
        test/e2e/network/loadbalancer.go:1466

  Spec Goroutine
  goroutine 3869 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 08:00:45.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
Nov 26 08:00:45.756: INFO: rc: 1
Nov 26 08:00:45.756: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: Error from server: error dialing backend: No agent available
error: exit status 1, retry until timeout
Nov 26 08:00:47.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 23m41.681s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 23m41.111s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 10m9.989s)
        test/e2e/network/loadbalancer.go:1466

  Spec Goroutine
  goroutine 3869 [select]
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc002dde840?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
  k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x101010020?, 0x0?})
    test/e2e/framework/kubectl/builder.go:154
  k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3()
    test/e2e/network/loadbalancer.go:1468
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 08:01:18.071: INFO: rc: 28
Nov 26 08:01:18.071: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: + curl -q -s --connect-timeout 30 34.105.76.113:80/clientip
command terminated with exit code 28
error: exit status 28, retry until timeout
Nov 26 08:01:18.071: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip'
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 24m1.683s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 24m1.113s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 10m29.992s)
        test/e2e/network/loadbalancer.go:1466

  Spec Goroutine
  goroutine 3869 [select, 2 minutes]
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000e2e580?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
  k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x100000020?, 0x0?})
    test/e2e/framework/kubectl/builder.go:154
  k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3()
    test/e2e/network/loadbalancer.go:1468
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #2
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 24m21.687s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 24m21.117s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.105.76.113 from pod pause-pod-deployment-5d788b4b5-r89pj on node bootstrap-e2e-minion-group-zhjw (Step Runtime: 10m49.995s)
        test/e2e/network/loadbalancer.go:1466

  Spec Goroutine
  goroutine 3869 [select, 2 minutes]
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000e2e580?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
  k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc004d63f50?, 0x1?}, {0xc0053b5ad8?, 0x100000020?, 0x0?})
    test/e2e/framework/kubectl/builder.go:154
  k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3()
    test/e2e/network/loadbalancer.go:1468
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x2?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00302b818, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb0?, 0x2fd9d05?, 0x28?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0053b5d00?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011d5800?, 0x77?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003581680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 08:01:44.784: INFO: rc: 1
Nov 26 08:01:44.784: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 exec pause-pod-deployment-5d788b4b5-r89pj -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.105.76.113:80/clientip:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("agnhost-pause")
error: exit status 1, retry until timeout
Nov 26 08:01:44.785: FAIL: Source IP not preserved from pause-pod-deployment-5d788b4b5-r89pj, expected '10.64.2.16' got ''

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.6()
	test/e2e/network/loadbalancer.go:1476 +0xabd
Nov 26 08:01:44.785: INFO: Deleting deployment
Nov 26 08:01:46.608: INFO: Waiting up to 15m0s for service "external-local-pods" to have no LoadBalancer
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 26 08:01:56.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 26 08:01:56.961: INFO: Output of kubectl describe svc:
Nov 26 08:01:56.961: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=esipp-5833 describe svc --namespace=esipp-5833'
Nov 26 08:01:57.290: INFO: stderr: ""
Nov 26 08:01:57.290: INFO: stdout: "Name:                     external-local-pods\nNamespace:                esipp-5833\nLabels:                   testid=external-local-pods-2c23fd8d-33bc-40ef-a4b5-c7a4194b59bd\nAnnotations:              <none>\nSelector:                 testid=external-local-pods-2c23fd8d-33bc-40ef-a4b5-c7a4194b59bd\nType:                     ClusterIP\nIP Family Policy:         SingleStack\nIP Families:              IPv4\nIP:                       10.0.245.167\nIPs:                      10.0.245.167\nPort:                     <unset>  80/TCP\nTargetPort:               80/TCP\nEndpoints:                \nSession Affinity:         None\nEvents:\n  Type    Reason                Age    From                Message\n  ----    ------                ----   ----                -------\n  Normal  EnsuringLoadBalancer  17m    service-controller  Ensuring load balancer\n  Normal  EnsuringLoadBalancer  11m    service-controller  Ensuring load balancer\n  Normal  EnsuredLoadBalancer   11m    service-controller  Ensured load balancer\n  Normal  EnsuringLoadBalancer  5m16s  service-controller  Ensuring load balancer\n  Normal  EnsuredLoadBalancer   5m12s  service-controller  Ensured load balancer\n"
Nov 26 08:01:57.290: INFO: Name:                     external-local-pods
Namespace:                esipp-5833
Labels:                   testid=external-local-pods-2c23fd8d-33bc-40ef-a4b5-c7a4194b59bd
Annotations:              <none>
Selector:                 testid=external-local-pods-2c23fd8d-33bc-40ef-a4b5-c7a4194b59bd
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.245.167
IPs:                      10.0.245.167
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                
Session Affinity:         None
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  17m    service-controller  Ensuring load balancer
  Normal  EnsuringLoadBalancer  11m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   11m    service-controller  Ensured load balancer
  Normal  EnsuringLoadBalancer  5m16s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   5m12s  service-controller  Ensured load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup
(Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 08:01:57.291
STEP: Collecting events from namespace "esipp-5833". 11/26/22 08:01:57.291
STEP: Found 23 events. 11/26/22 08:01:57.334
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:44:35 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:29 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:48 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:49 +0000 UTC - event for external-local-pods: {replication-controller } SuccessfulCreate: Created pod: external-local-pods-nhgcw
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:49 +0000 UTC - event for external-local-pods-nhgcw: {default-scheduler } Scheduled: Successfully assigned esipp-5833/external-local-pods-nhgcw to bootstrap-e2e-minion-group-zhjw
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:50 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:50 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} Created: Created container netexec
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:50 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} Started: Started container netexec
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:52 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} Killing: Stopping container netexec
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:52 +0000 UTC - event for pause-pod-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set pause-pod-deployment-5d788b4b5 to 1
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:52 +0000 UTC - event for pause-pod-deployment-5d788b4b5: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-deployment-5d788b4b5-r89pj
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:52 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {default-scheduler } Scheduled: Successfully assigned esipp-5833/pause-pod-deployment-5d788b4b5-r89pj to bootstrap-e2e-minion-group-zhjw
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:53 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:53 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:53 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} Created: Created container agnhost-pause
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:53 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} Started: Started container agnhost-pause
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:57 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} BackOff: Back-off restarting failed container netexec in pod external-local-pods-nhgcw_esipp-5833(8a02f0f7-3db0-4ff4-8493-55652006fad7)
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:50:57 +0000 UTC - event for external-local-pods-nhgcw: {kubelet bootstrap-e2e-minion-group-zhjw} Unhealthy: Readiness probe failed: Get "http://10.64.2.17:80/hostName": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:52:11 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} Killing: Stopping container agnhost-pause
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:52:12 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:52:15 +0000 UTC - event for pause-pod-deployment-5d788b4b5-r89pj: {kubelet bootstrap-e2e-minion-group-zhjw} BackOff: Back-off restarting failed container agnhost-pause in pod pause-pod-deployment-5d788b4b5-r89pj_esipp-5833(315fa6ed-2add-4e29-a607-49b19ab9a9eb)
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:56:41 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 08:01:57.335: INFO: At 2022-11-26 07:56:45 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 08:01:57.377: INFO: POD                                   NODE                            PHASE    GRACE  CONDITIONS
Nov 26 08:01:57.377: INFO: external-local-pods-nhgcw bootstrap-e2e-minion-group-zhjw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:50:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:59:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:59:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:50:49 +0000 UTC }]
Nov 26 08:01:57.377: INFO: pause-pod-deployment-5d788b4b5-r89pj bootstrap-e2e-minion-group-zhjw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:50:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 08:01:30 +0000 UTC ContainersNotReady containers with unready status: [agnhost-pause]} {ContainersReady False 0001-01-01 00:00:00
+0000 UTC 2022-11-26 08:01:30 +0000 UTC ContainersNotReady containers with unready status: [agnhost-pause]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:50:52 +0000 UTC }] Nov 26 08:01:57.377: INFO: Nov 26 08:01:57.530: INFO: Logging node info for node bootstrap-e2e-master Nov 26 08:01:57.573: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f12dfba9-8340-4384-a012-464bb8ff014b 15632 0 2022-11-26 07:14:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:58:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:13 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:13 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:13 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:58:13 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.104.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4341b6df721ee06de14317c6e64c7913,SystemUUID:4341b6df-721e-e06d-e143-17c6e64c7913,BootID:0fd660c7-349c-4c78-8001-012f07790551,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 08:01:57.573: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 08:01:57.633: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 08:01:57.699: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container konnectivity-server-container ready: true, restart count 6
Nov 26 08:01:57.699: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container kube-apiserver ready: true, restart count 4
Nov 26 08:01:57.699: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 07:13:53 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container kube-addon-manager ready: true, restart count 2
Nov 26 08:01:57.699: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 07:13:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container kube-controller-manager ready: false, restart count 9
Nov 26 08:01:57.699: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container etcd-container ready: true, restart count 2
Nov 26 08:01:57.699: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container etcd-container ready: true, restart count 5
Nov 26 08:01:57.699: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 07:13:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container kube-scheduler ready: false, restart count 8
Nov 26 08:01:57.699: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 07:13:53 +0000 UTC (0+1 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container l7-lb-controller ready: true, restart count 12
Nov 26 08:01:57.699: INFO: metadata-proxy-v0.1-f9lfz started at 2022-11-26 07:14:27 +0000 UTC (0+2 container statuses recorded)
Nov 26 08:01:57.699: INFO: 	Container metadata-proxy ready: true, restart count 0
Nov 26 08:01:57.699: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 08:01:57.872: INFO: Latency metrics for node bootstrap-e2e-master
Nov 26 08:01:57.872: INFO: Logging node info for node bootstrap-e2e-minion-group-svrn
Nov 26 08:01:57.915: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-svrn 0b46f31f-d25c-4604-ba86-b3e98c09449d 16107 0 2022-11-26 07:14:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-svrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-svrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3878":"bootstrap-e2e-minion-group-svrn","csi-mock-csi-mock-volumes-5988":"bootstrap-e2e-minion-group-svrn"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:30 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:38:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 07:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 08:01:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-svrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:09 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:20 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:20 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:20 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:57:20 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.23.98,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a792d55bc5ad5cdad144cb5b4dfa29f,SystemUUID:6a792d55-bc5a-d5cd-ad14-4cb5b4dfa29f,BootID:d19434b3-94eb-452d-a279-fc84362b7cab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5988^133ed1f7-6d5d-11ed-8921-d2d874b08a41],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 08:01:57.916: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-svrn Nov 26 08:01:57.965: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-svrn Nov 26 08:01:58.075: INFO: metadata-proxy-v0.1-hbvvs started at 2022-11-26 07:14:31 +0000 UTC (0+2 container statuses recorded) Nov 26 08:01:58.075: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 08:01:58.075: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 08:01:58.075: INFO: back-off-cap started at 2022-11-26 07:28:02 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container back-off-cap ready: false, restart count 11 Nov 26 08:01:58.075: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:07 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-attacher ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container csi-provisioner ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container csi-resizer ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container csi-snapshotter ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container hostpath ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container liveness-probe ready: false, restart count 9 Nov 26 08:01:58.075: INFO: Container node-driver-registrar ready: false, restart count 9 Nov 26 08:01:58.075: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:32 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-attacher ready: true, restart count 9 Nov 26 08:01:58.075: INFO: 
Container csi-provisioner ready: true, restart count 9 Nov 26 08:01:58.075: INFO: Container csi-resizer ready: true, restart count 9 Nov 26 08:01:58.075: INFO: Container csi-snapshotter ready: true, restart count 9 Nov 26 08:01:58.075: INFO: Container hostpath ready: true, restart count 9 Nov 26 08:01:58.075: INFO: Container liveness-probe ready: true, restart count 9 Nov 26 08:01:58.075: INFO: Container node-driver-registrar ready: true, restart count 9 Nov 26 08:01:58.075: INFO: csi-mockplugin-0 started at 2022-11-26 07:36:34 +0000 UTC (0+3 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 08:01:58.075: INFO: Container driver-registrar ready: true, restart count 7 Nov 26 08:01:58.075: INFO: Container mock ready: true, restart count 7 Nov 26 08:01:58.075: INFO: kube-proxy-bootstrap-e2e-minion-group-svrn started at 2022-11-26 07:14:30 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container kube-proxy ready: true, restart count 11 Nov 26 08:01:58.075: INFO: coredns-6d97d5ddb-znrwb started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container coredns ready: true, restart count 13 Nov 26 08:01:58.075: INFO: volume-snapshot-controller-0 started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container volume-snapshot-controller ready: true, restart count 11 Nov 26 08:01:58.075: INFO: pod-subpath-test-inlinevolume-zshr started at 2022-11-26 07:17:36 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.075: INFO: Init container init-volume-inlinevolume-zshr ready: true, restart count 10 Nov 26 08:01:58.075: INFO: Container test-container-subpath-inlinevolume-zshr ready: false, restart count 11 Nov 26 08:01:58.075: INFO: Container test-container-volume-inlinevolume-zshr ready: false, restart count 11 Nov 26 08:01:58.075: INFO: kube-dns-autoscaler-5f6455f985-4pppz started at 
2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container autoscaler ready: false, restart count 10 Nov 26 08:01:58.075: INFO: hostexec-bootstrap-e2e-minion-group-svrn-ndxqc started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container agnhost-container ready: false, restart count 8 Nov 26 08:01:58.075: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:08 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-attacher ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container csi-provisioner ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container csi-resizer ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container csi-snapshotter ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container hostpath ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container liveness-probe ready: false, restart count 8 Nov 26 08:01:58.075: INFO: Container node-driver-registrar ready: false, restart count 8 Nov 26 08:01:58.075: INFO: pvc-volume-tester-hdf97 started at 2022-11-26 07:36:46 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container volume-tester ready: false, restart count 0 Nov 26 08:01:58.075: INFO: netserver-0 started at 2022-11-26 07:37:35 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container webserver ready: false, restart count 7 Nov 26 08:01:58.075: INFO: l7-default-backend-8549d69d99-fz66r started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 08:01:58.075: INFO: konnectivity-agent-59kfk started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container konnectivity-agent ready: true, restart count 11 Nov 26 08:01:58.075: INFO: pod-e251ec22-6288-4cf2-a290-a063e3c72c06 started at 2022-11-26 
07:17:00 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.075: INFO: Container write-pod ready: false, restart count 0 Nov 26 08:01:58.075: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:21 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-attacher ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-provisioner ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-resizer ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container hostpath ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container liveness-probe ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container node-driver-registrar ready: false, restart count 7 Nov 26 08:01:58.075: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:38:12 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.075: INFO: Container csi-attacher ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-provisioner ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-resizer ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container hostpath ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container liveness-probe ready: false, restart count 7 Nov 26 08:01:58.075: INFO: Container node-driver-registrar ready: false, restart count 7 Nov 26 08:01:58.292: INFO: Latency metrics for node bootstrap-e2e-minion-group-svrn Nov 26 08:01:58.292: INFO: Logging node info for node bootstrap-e2e-minion-group-v6kp Nov 26 08:01:58.336: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v6kp 1b4c00d7-9f80-4c8f-bcb4-5fdf079da6d6 16120 0 2022-11-26 07:14:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v6kp kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-v6kp topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8553":"bootstrap-e2e-minion-group-v6kp","csi-mock-csi-mock-volumes-4257":"bootstrap-e2e-minion-group-v6kp"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:56:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 07:58:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 08:01:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-v6kp,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:58:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:57:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:57:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.227.156.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35b699b12f5019228f1e2e38d963976d,SystemUUID:35b699b1-2f50-1922-8f1e-2e38d963976d,BootID:5793a9ad-d1f5-4512-925a-2b321cb699ee,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627 kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627,DevicePath:,},},Config:nil,},} Nov 26 08:01:58.336: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v6kp Nov 26 08:01:58.384: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v6kp Nov 26 08:01:58.490: INFO: metadata-proxy-v0.1-7k4s6 started at 2022-11-26 07:14:27 +0000 UTC (0+2 container statuses recorded) Nov 26 08:01:58.490: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 08:01:58.490: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-hrstr started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: true, restart count 8 Nov 26 
08:01:58.490: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:32:39 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.490: INFO: Container csi-attacher ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-provisioner ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-resizer ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container hostpath ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container liveness-probe ready: false, restart count 7 Nov 26 08:01:58.490: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-dqt4r started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: false, restart count 8 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-lfftx started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: true, restart count 7 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-bkzbv started at 2022-11-26 07:17:32 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: false, restart count 11 Nov 26 08:01:58.490: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 07:26:48 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container csi-attacher ready: true, restart count 8 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-dq8fq started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: false, restart count 9 Nov 26 08:01:58.490: INFO: pod-configmaps-601d851b-9baa-4ba4-939b-2d8ceb3ae50c started at 2022-11-26 07:29:25 +0000 UTC (0+1 container statuses recorded) Nov 26 
08:01:58.490: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 08:01:58.490: INFO: pod-subpath-test-dynamicpv-z58q started at 2022-11-26 07:31:41 +0000 UTC (1+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Init container init-volume-dynamicpv-z58q ready: true, restart count 0 Nov 26 08:01:58.490: INFO: Container test-container-subpath-dynamicpv-z58q ready: false, restart count 0 Nov 26 08:01:58.490: INFO: coredns-6d97d5ddb-k477c started at 2022-11-26 07:14:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container coredns ready: false, restart count 12 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-w7jkx started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: false, restart count 9 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-4dj2d started at 2022-11-26 07:17:10 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: false, restart count 9 Nov 26 08:01:58.490: INFO: pod-subpath-test-preprovisionedpv-5228 started at 2022-11-26 07:17:15 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.490: INFO: Init container init-volume-preprovisionedpv-5228 ready: true, restart count 4 Nov 26 08:01:58.490: INFO: Container test-container-subpath-preprovisionedpv-5228 ready: true, restart count 9 Nov 26 08:01:58.490: INFO: Container test-container-volume-preprovisionedpv-5228 ready: true, restart count 9 Nov 26 08:01:58.490: INFO: hostexec-bootstrap-e2e-minion-group-v6kp-hjsww started at 2022-11-26 07:17:36 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container agnhost-container ready: true, restart count 9 Nov 26 08:01:58.490: INFO: csi-mockplugin-0 started at 2022-11-26 07:26:48 +0000 UTC (0+3 container statuses recorded) Nov 26 08:01:58.490: INFO: Container csi-provisioner ready: true, restart count 14 Nov 26 
08:01:58.490: INFO: Container driver-registrar ready: true, restart count 11 Nov 26 08:01:58.490: INFO: Container mock ready: true, restart count 14 Nov 26 08:01:58.490: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:29:38 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.490: INFO: Container csi-attacher ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container csi-provisioner ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container csi-resizer ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container csi-snapshotter ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container hostpath ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container liveness-probe ready: false, restart count 9 Nov 26 08:01:58.490: INFO: Container node-driver-registrar ready: false, restart count 9 Nov 26 08:01:58.490: INFO: kube-proxy-bootstrap-e2e-minion-group-v6kp started at 2022-11-26 07:14:26 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container kube-proxy ready: false, restart count 11 Nov 26 08:01:58.490: INFO: pod-subpath-test-preprovisionedpv-bcww started at 2022-11-26 07:17:00 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.490: INFO: Init container init-volume-preprovisionedpv-bcww ready: true, restart count 0 Nov 26 08:01:58.490: INFO: Container test-container-subpath-preprovisionedpv-bcww ready: true, restart count 9 Nov 26 08:01:58.490: INFO: Container test-container-volume-preprovisionedpv-bcww ready: true, restart count 9 Nov 26 08:01:58.490: INFO: hostpathsymlink-io-client started at 2022-11-26 07:17:30 +0000 UTC (1+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Init container hostpathsymlink-io-init ready: true, restart count 0 Nov 26 08:01:58.490: INFO: Container hostpathsymlink-io-client ready: false, restart count 0 Nov 26 08:01:58.490: INFO: pod-subpath-test-dynamicpv-sbdn started at 2022-11-26 07:17:17 +0000 UTC (1+1 container statuses recorded) Nov 26 
08:01:58.490: INFO: Init container init-volume-dynamicpv-sbdn ready: true, restart count 0 Nov 26 08:01:58.490: INFO: Container test-container-subpath-dynamicpv-sbdn ready: false, restart count 0 Nov 26 08:01:58.490: INFO: konnectivity-agent-psnzt started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container konnectivity-agent ready: false, restart count 10 Nov 26 08:01:58.490: INFO: volume-prep-provisioning-6967 started at 2022-11-26 07:17:31 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container init-volume-provisioning-6967 ready: false, restart count 0 Nov 26 08:01:58.490: INFO: netserver-1 started at 2022-11-26 07:37:35 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.490: INFO: Container webserver ready: false, restart count 7 Nov 26 08:01:58.490: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:31:44 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.490: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container hostpath ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 08:01:58.490: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 08:01:58.740: INFO: Latency metrics for node bootstrap-e2e-minion-group-v6kp Nov 26 08:01:58.740: INFO: Logging node info for node bootstrap-e2e-minion-group-zhjw Nov 26 08:01:58.783: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zhjw 02d1b2e8-572a-4705-ba12-2a030476f45b 16069 0 2022-11-26 07:14:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zhjw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zhjw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1907":"bootstrap-e2e-minion-group-zhjw","csi-mock-csi-mock-volumes-9498":"bootstrap-e2e-minion-group-zhjw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:37:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 07:58:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 08:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-zhjw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:58:00 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:15 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:15 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:58:15 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:58:15 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.36.0,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cc67b7d9c606cf13b518cf0cb8b22fe6,SystemUUID:cc67b7d9-c606-cf13-b518-cf0cb8b22fe6,BootID:a06198bc-32f7-4d08-b37d-b3aaad431e87,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 08:01:58.783: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zhjw Nov 26 08:01:58.830: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zhjw Nov 26 08:01:58.890: INFO: kube-proxy-bootstrap-e2e-minion-group-zhjw started at 2022-11-26 07:14:28 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container kube-proxy ready: false, restart count 11 Nov 26 08:01:58.890: INFO: pod-subpath-test-inlinevolume-7tw8 started at 2022-11-26 07:17:28 +0000 UTC (1+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Init container init-volume-inlinevolume-7tw8 ready: true, restart count 0 Nov 26 08:01:58.890: INFO: Container test-container-subpath-inlinevolume-7tw8 
ready: false, restart count 0 Nov 26 08:01:58.890: INFO: ilb-host-exec started at 2022-11-26 07:50:56 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 08:01:58.890: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:37:17 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.890: INFO: Container csi-attacher ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container csi-provisioner ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container csi-resizer ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container hostpath ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container liveness-probe ready: false, restart count 7 Nov 26 08:01:58.890: INFO: Container node-driver-registrar ready: false, restart count 7 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-jnb62 started at 2022-11-26 07:17:24 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 10 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-tk6j2 started at 2022-11-26 07:17:18 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: false, restart count 10 Nov 26 08:01:58.890: INFO: pod-subpath-test-preprovisionedpv-62rx started at 2022-11-26 07:17:15 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.890: INFO: Init container init-volume-preprovisionedpv-62rx ready: true, restart count 9 Nov 26 08:01:58.890: INFO: Container test-container-subpath-preprovisionedpv-62rx ready: true, restart count 11 Nov 26 08:01:58.890: INFO: Container test-container-volume-preprovisionedpv-62rx ready: true, restart count 11 Nov 26 08:01:58.890: INFO: csi-mockplugin-0 started at 2022-11-26 07:27:24 +0000 UTC (0+4 container statuses recorded) Nov 26 
08:01:58.890: INFO: Container busybox ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container csi-provisioner ready: false, restart count 9 Nov 26 08:01:58.890: INFO: Container driver-registrar ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container mock ready: false, restart count 8 Nov 26 08:01:58.890: INFO: pod-5a31e133-2897-4536-b4f3-5df6ba103b38 started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container write-pod ready: false, restart count 0 Nov 26 08:01:58.890: INFO: metadata-proxy-v0.1-vzmrj started at 2022-11-26 07:14:29 +0000 UTC (0+2 container statuses recorded) Nov 26 08:01:58.890: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 08:01:58.890: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 08:01:58.890: INFO: httpd started at 2022-11-26 07:32:14 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container httpd ready: true, restart count 9 Nov 26 08:01:58.890: INFO: netserver-2 started at 2022-11-26 07:37:35 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container webserver ready: false, restart count 6 Nov 26 08:01:58.890: INFO: csi-hostpathplugin-0 started at 2022-11-26 07:36:36 +0000 UTC (0+7 container statuses recorded) Nov 26 08:01:58.890: INFO: Container csi-attacher ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container csi-provisioner ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container csi-resizer ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container csi-snapshotter ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container hostpath ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container liveness-probe ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container node-driver-registrar ready: false, restart count 10 Nov 26 08:01:58.890: INFO: metrics-server-v0.5.2-867b8754b9-72b8p started at 2022-11-26 07:15:04 +0000 
UTC (0+2 container statuses recorded) Nov 26 08:01:58.890: INFO: Container metrics-server ready: false, restart count 13 Nov 26 08:01:58.890: INFO: Container metrics-server-nanny ready: false, restart count 13 Nov 26 08:01:58.890: INFO: pause-pod-deployment-5d788b4b5-r89pj started at 2022-11-26 07:50:52 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-pause ready: false, restart count 3 Nov 26 08:01:58.890: INFO: lb-internal-d7rbp started at 2022-11-26 07:37:05 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container netexec ready: false, restart count 8 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-45fbb started at 2022-11-26 07:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 7 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-jj84b started at 2022-11-26 07:17:14 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 11 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-xd7km started at 2022-11-26 07:17:25 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 9 Nov 26 08:01:58.890: INFO: csi-mockplugin-0 started at 2022-11-26 07:25:55 +0000 UTC (0+3 container statuses recorded) Nov 26 08:01:58.890: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 08:01:58.890: INFO: Container driver-registrar ready: true, restart count 7 Nov 26 08:01:58.890: INFO: Container mock ready: true, restart count 7 Nov 26 08:01:58.890: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 07:25:55 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container csi-attacher ready: true, restart count 8 Nov 26 08:01:58.890: INFO: pod-configmaps-3dd17a2e-0a49-47d7-a918-4415d8ce4938 started at 2022-11-26 07:36:16 
+0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 08:01:58.890: INFO: pod-secrets-c63a1486-547e-48cc-b22e-6cd4b91f9310 started at 2022-11-26 07:37:10 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 08:01:58.890: INFO: pod-secrets-7e8aa3b9-9100-4846-be61-1284a101ae60 started at 2022-11-26 07:37:10 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 08:01:58.890: INFO: pod-subpath-test-preprovisionedpv-829z started at 2022-11-26 07:17:31 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.890: INFO: Init container init-volume-preprovisionedpv-829z ready: true, restart count 0 Nov 26 08:01:58.890: INFO: Container test-container-subpath-preprovisionedpv-829z ready: true, restart count 10 Nov 26 08:01:58.890: INFO: Container test-container-volume-preprovisionedpv-829z ready: true, restart count 10 Nov 26 08:01:58.890: INFO: pod-back-off-image started at 2022-11-26 07:36:24 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container back-off ready: false, restart count 9 Nov 26 08:01:58.890: INFO: hostexec-bootstrap-e2e-minion-group-zhjw-qtnr9 started at 2022-11-26 07:17:36 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container agnhost-container ready: true, restart count 9 Nov 26 08:01:58.890: INFO: external-local-pods-nhgcw started at 2022-11-26 07:50:49 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container netexec ready: false, restart count 5 Nov 26 08:01:58.890: INFO: konnectivity-agent-zm9hn started at 2022-11-26 07:14:42 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container konnectivity-agent ready: false, restart count 11 Nov 26 08:01:58.890: INFO: pod-subpath-test-preprovisionedpv-kvq4 started at 
2022-11-26 07:17:31 +0000 UTC (1+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Init container init-volume-preprovisionedpv-kvq4 ready: true, restart count 0 Nov 26 08:01:58.890: INFO: Container test-container-subpath-preprovisionedpv-kvq4 ready: false, restart count 0 Nov 26 08:01:58.890: INFO: csi-mockplugin-0 started at 2022-11-26 07:17:10 +0000 UTC (0+3 container statuses recorded) Nov 26 08:01:58.890: INFO: Container csi-provisioner ready: true, restart count 8 Nov 26 08:01:58.890: INFO: Container driver-registrar ready: true, restart count 8 Nov 26 08:01:58.890: INFO: Container mock ready: true, restart count 8 Nov 26 08:01:58.890: INFO: pod-16452104-42be-4e22-9ea5-25ee39d95a22 started at 2022-11-26 07:17:33 +0000 UTC (0+1 container statuses recorded) Nov 26 08:01:58.890: INFO: Container write-pod ready: false, restart count 0 Nov 26 08:01:58.890: INFO: pod-subpath-test-inlinevolume-n2sg started at 2022-11-26 07:37:24 +0000 UTC (1+2 container statuses recorded) Nov 26 08:01:58.890: INFO: Init container init-volume-inlinevolume-n2sg ready: true, restart count 3 Nov 26 08:01:58.890: INFO: Container test-container-subpath-inlinevolume-n2sg ready: false, restart count 8 Nov 26 08:01:58.890: INFO: Container test-container-volume-inlinevolume-n2sg ready: false, restart count 8 Nov 26 08:01:59.135: INFO: Latency metrics for node bootstrap-e2e-minion-group-zhjw [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-5833" for this suite. 11/26/22 08:01:59.135
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/service/util.go:48 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0034f1390, 0xc}, 0x77f6, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:48 +0x265 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:120 +0x465from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:37:26.023 Nov 26 07:37:26.024: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 07:37:26.027 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:37:26.205 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:37:26.294 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a TCP service [Slow] test/e2e/network/loadbalancer.go:77 Nov 26 07:37:26.517: INFO: namespace for TCP test: loadbalancers-8386 STEP: creating a TCP service mutability-test with type=ClusterIP in namespace loadbalancers-8386 11/26/22 07:37:26.599 Nov 26 07:37:26.703: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service mutability-test 11/26/22 07:37:26.703 Nov 26 07:37:26.771: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 07:37:26.834: INFO: Found all 1 pods Nov 26 07:37:26.834: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-dphfv] Nov 26 07:37:26.834: INFO: Waiting up to 2m0s for pod "mutability-test-dphfv" in namespace "loadbalancers-8386" to be "running and ready" Nov 26 07:37:26.923: INFO: Pod "mutability-test-dphfv": Phase="Pending", Reason="", readiness=false. Elapsed: 89.326713ms Nov 26 07:37:26.923: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-dphfv' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:37:29.076: INFO: Pod "mutability-test-dphfv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.242358899s [the "running and ready" check repeats every ~2s, each time logging Phase="Pending", until:] Nov 26 07:37:43.013: INFO: Pod "mutability-test-dphfv": Phase="Running", Reason="", readiness=true. Elapsed: 16.178817173s Nov 26 07:37:43.013: INFO: Pod "mutability-test-dphfv" satisfied condition "running and ready" Nov 26 07:37:43.013: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-dphfv] STEP: changing the TCP service to type=NodePort 11/26/22 07:37:43.013 Nov 26 07:37:43.228: INFO: TCP node port: 30710 STEP: hitting the TCP service's NodePort 11/26/22 07:37:43.228 Nov 26 07:37:43.229: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:37:43.268: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused [this poke/retry cycle repeats every ~2s with the same "connect: connection refused" error, apart from a brief run of "connect: no route to host" between 07:37:55 and 07:38:03; every attempt fails through the last logged poke at 07:41:41]
07:41:43.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:43.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:45.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:45.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:47.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:47.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:49.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:49.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:51.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:51.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:53.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:53.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:55.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:55.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:57.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:57.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get 
"http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:41:59.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:41:59.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:01.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:01.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:03.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:03.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:05.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:05.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:07.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:07.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:09.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:09.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:11.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:11.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:13.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 
07:42:13.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:15.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:15.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:17.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:17.308: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:19.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:19.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:21.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:21.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:23.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:23.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused Nov 26 07:42:25.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello" Nov 26 07:42:25.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m0.429s) test/e2e/network/loadbalancer.go:77 In [It] 
  (Node Runtime: 5m0.001s)
    test/e2e/network/loadbalancer.go:77
  At [By Step] hitting the TCP service's NodePort (Step Runtime: 4m43.224s)
    test/e2e/network/loadbalancer.go:119
Spec Goroutine
goroutine 8707 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003ac0558, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xd0?, 0x2fd9d05?, 0x28?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2d?, 0xc003957c20?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00146e870?, 0x766a5c9?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0034f1390, 0xc}, 0x77f6, {0xae73300, 0x0, 0x0}, 0x1?)
	test/e2e/framework/service/util.go:46
k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...)
	test/e2e/framework/service/util.go:29
> k8s.io/kubernetes/test/e2e/network.glob..func19.3()
	test/e2e/network/loadbalancer.go:120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0017fcf00, 0xc00011ef80})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 07:42:27.269: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello"
Nov 26 07:42:27.309: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused
... (the same Poking/connection-refused pair repeats every ~2s from 07:42:29 through 07:42:43) ...
Nov 26 07:42:43.308: INFO: Poking "http://34.127.23.98:30710/echo?msg=hello"
Nov 26 07:42:43.348: INFO: Poke("http://34.127.23.98:30710/echo?msg=hello"): Get "http://34.127.23.98:30710/echo?msg=hello": dial tcp 34.127.23.98:30710: connect: connection refused
Nov 26 07:42:43.348: FAIL: Could not reach HTTP service through 34.127.23.98:30710 after 5m0s
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc0034f1390, 0xc}, 0x77f6, {0xae73300, 0x0, 0x0}, 0x1?)
	test/e2e/framework/service/util.go:48 +0x265
k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...)
	test/e2e/framework/service/util.go:29
k8s.io/kubernetes/test/e2e/network.glob..func19.3()
	test/e2e/network/loadbalancer.go:120 +0x465
[AfterEach] [sig-network] LoadBalancers
  test/e2e/framework/node/init/init.go:32
Nov 26 07:42:43.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers
  test/e2e/network/loadbalancer.go:71
Nov 26 07:42:43.520: INFO: Output of kubectl describe svc:
Nov 26 07:42:43.520: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.127.104.189 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8386 describe svc --namespace=loadbalancers-8386'
Nov 26 07:42:43.873: INFO: stderr: ""
Nov 26 07:42:43.873: INFO: stdout:
Name:                     mutability-test
Namespace:                loadbalancers-8386
Labels:                   testid=mutability-test-b77198f8-0e29-4d68-94e8-8afa573b7f3c
Annotations:              <none>
Selector:                 testid=mutability-test-b77198f8-0e29-4d68-94e8-8afa573b7f3c
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.73.142
IPs:                      10.0.73.142
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30710/TCP
Endpoints:                10.64.2.209:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[DeferCleanup (Each)] [sig-network] LoadBalancers
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 07:42:43.873
STEP: Collecting events from namespace "loadbalancers-8386". 11/26/22 07:42:43.873
STEP: Found 8 events. 11/26/22 07:42:43.915
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:26 +0000 UTC - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-dphfv
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:26 +0000 UTC - event for mutability-test-dphfv: {default-scheduler } Scheduled: Successfully assigned loadbalancers-8386/mutability-test-dphfv to bootstrap-e2e-minion-group-zhjw
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:35 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:35 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} Created: Created container netexec
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:35 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} Started: Started container netexec
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:37 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} Killing: Stopping container netexec
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:38 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 07:42:43.915: INFO: At 2022-11-26 07:37:43 +0000 UTC - event for mutability-test-dphfv: {kubelet bootstrap-e2e-minion-group-zhjw} BackOff: Back-off restarting failed container netexec in pod mutability-test-dphfv_loadbalancers-8386(1a7c7cbc-b515-4ad2-842e-084537423330) Nov 26 07:42:43.961: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 07:42:43.961: INFO: mutability-test-dphfv bootstrap-e2e-minion-group-zhjw Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:37:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:42:04 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:42:04 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 07:37:26 +0000 UTC }] Nov 26 07:42:43.961: INFO: Nov 26 07:42:44.005: INFO: Unable to fetch loadbalancers-8386/mutability-test-dphfv/netexec logs: an error on the server ("unknown") has prevented the request from succeeding (get pods mutability-test-dphfv) Nov 26 07:42:44.053: INFO: Logging node info for node bootstrap-e2e-master Nov 26 07:42:44.095: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f12dfba9-8340-4384-a012-464bb8ff014b 13407 0 2022-11-26 07:14:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 07:40:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:40:14 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:40:14 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:40:14 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:40:14 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.104.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4341b6df721ee06de14317c6e64c7913,SystemUUID:4341b6df-721e-e06d-e143-17c6e64c7913,BootID:0fd660c7-349c-4c78-8001-012f07790551,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:42:44.096: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 07:42:44.140: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 07:42:44.184: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 07:42:44.184: INFO: Logging node info for node bootstrap-e2e-minion-group-svrn Nov 26 07:42:44.226: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-svrn 0b46f31f-d25c-4604-ba86-b3e98c09449d 13502 0 2022-11-26 07:14:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-svrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-svrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9428":"bootstrap-e2e-minion-group-svrn","csi-hostpath-provisioning-9550":"bootstrap-e2e-minion-group-svrn","csi-mock-csi-mock-volumes-5988":"bootstrap-e2e-minion-group-svrn"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:38:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 07:40:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 07:41:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-svrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:40:07 +0000 UTC,LastTransitionTime:2022-11-26 07:14:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:38:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:38:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:38:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:38:35 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.23.98,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-svrn.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a792d55bc5ad5cdad144cb5b4dfa29f,SystemUUID:6a792d55-bc5a-d5cd-ad14-4cb5b4dfa29f,BootID:d19434b3-94eb-452d-a279-fc84362b7cab,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8372^ab1f7fc6-6d5c-11ed-96c7-c2ddb80fc067 kubernetes.io/csi/csi-mock-csi-mock-volumes-5988^133ed1f7-6d5d-11ed-8921-d2d874b08a41],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8372^ab1f7fc6-6d5c-11ed-96c7-c2ddb80fc067,DevicePath:,},},Config:nil,},} Nov 26 07:42:44.226: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-svrn Nov 26 07:42:44.271: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-svrn Nov 26 07:42:44.314: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-svrn: error trying to reach service: No agent available Nov 26 07:42:44.314: INFO: Logging node info for node bootstrap-e2e-minion-group-v6kp Nov 26 07:42:44.356: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-v6kp 1b4c00d7-9f80-4c8f-bcb4-5fdf079da6d6 13614 0 2022-11-26 07:14:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-v6kp kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-v6kp topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8553":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-multivolume-8709":"bootstrap-e2e-minion-group-v6kp","csi-hostpath-provisioning-4171":"bootstrap-e2e-minion-group-v6kp","csi-mock-csi-mock-volumes-4257":"bootstrap-e2e-minion-group-v6kp"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:36:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 07:40:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 07:42:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-v6kp,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:40:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:29 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:42:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:42:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:42:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:42:05 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.227.156.189,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-v6kp.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35b699b12f5019228f1e2e38d963976d,SystemUUID:35b699b1-2f50-1922-8f1e-2e38d963976d,BootID:5793a9ad-d1f5-4512-925a-2b321cb699ee,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8553^6418808c-6d5c-11ed-83de-86d9cddca60a kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627 kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2652^5b9d621e-6d5a-11ed-bfab-ae8588c81627,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8553^6418808c-6d5c-11ed-83de-86d9cddca60a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4171^1732c9d0-6d5c-11ed-b59c-a2ff331b1a4f,DevicePath:,},},Config:nil,},} Nov 26 07:42:44.357: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-v6kp Nov 26 07:42:44.402: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-v6kp Nov 26 07:42:44.455: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-v6kp: error trying to reach service: No agent available Nov 26 07:42:44.455: INFO: Logging node info for node bootstrap-e2e-minion-group-zhjw Nov 26 07:42:44.497: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zhjw 
02d1b2e8-572a-4705-ba12-2a030476f45b 13622 0 2022-11-26 07:14:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zhjw kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zhjw topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-489":"bootstrap-e2e-minion-group-zhjw","csi-hostpath-multivolume-8388":"bootstrap-e2e-minion-group-zhjw","csi-mock-csi-mock-volumes-1907":"bootstrap-e2e-minion-group-zhjw","csi-mock-csi-mock-volumes-9498":"bootstrap-e2e-minion-group-zhjw"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 07:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 07:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 07:37:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 07:40:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 07:42:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jenkins-cvm/us-west1-b/bootstrap-e2e-minion-group-zhjw,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 07:40:08 +0000 UTC,LastTransitionTime:2022-11-26 07:14:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 07:14:42 +0000 UTC,LastTransitionTime:2022-11-26 07:14:42 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:48 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:48 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 07:37:48 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 07:37:48 +0000 UTC,LastTransitionTime:2022-11-26 07:14:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.36.0,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zhjw.c.k8s-jenkins-cvm.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cc67b7d9c606cf13b518cf0cb8b22fe6,SystemUUID:cc67b7d9-c606-cf13-b518-cf0cb8b22fe6,BootID:a06198bc-32f7-4d08-b37d-b3aaad431e87,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 07:42:44.498: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-zhjw Nov 26 07:42:44.542: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-zhjw Nov 26 07:42:44.585: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-zhjw: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-8386" for this suite. 11/26/22 07:42:44.585
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/network/loadbalancer.go:638 k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:638 +0x634
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 07:37:05.135 Nov 26 07:37:05.136: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 07:37:05.137 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 07:37:05.265 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 07:37:05.345 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/26/22 07:37:05.519 Nov 26 07:37:05.566: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 07:37:05.608: INFO: Found all 1 pods Nov 26 07:37:05.608: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-d7rbp] Nov 26 07:37:05.608: INFO: Waiting up to 2m0s for pod "lb-internal-d7rbp" in namespace "loadbalancers-564" to be "running and ready" Nov 26 07:37:05.665: INFO: Pod "lb-internal-d7rbp": Phase="Pending", Reason="", readiness=false. Elapsed: 56.561182ms Nov 26 07:37:05.665: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-d7rbp' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:37:07.747: INFO: Pod "lb-internal-d7rbp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139419041s Nov 26 07:37:07.748: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-d7rbp' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:37:09.706: INFO: Pod "lb-internal-d7rbp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.098233859s Nov 26 07:37:09.706: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-d7rbp' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:37:11.747: INFO: Pod "lb-internal-d7rbp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138909876s Nov 26 07:37:11.747: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-d7rbp' on 'bootstrap-e2e-minion-group-zhjw' to be 'Running' but was 'Pending' Nov 26 07:37:13.782: INFO: Pod "lb-internal-d7rbp": Phase="Running", Reason="", readiness=true. Elapsed: 8.174278014s Nov 26 07:37:13.782: INFO: Pod "lb-internal-d7rbp" satisfied condition "running and ready" Nov 26 07:37:13.782: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-internal-d7rbp] STEP: creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled 11/26/22 07:37:13.782 Nov 26 07:37:13.965: INFO: Waiting up to 15m0s for service "lb-internal" to have a LoadBalancer Nov 26 07:38:42.098: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:44.070: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:46.070: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:48.069: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:38:50.071: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:56.069: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused Nov 26 07:39:58.070: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.127.104.189/api/v1/namespaces/loadbalancers-564/services/lb-internal": dial tcp 34.127.104.189:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m0.337s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 4m51.69s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 2743 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000d1db00, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0007d6120?, 0xc001b81b80?, 0x262a967?)