go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
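The line above drives the test through hack/e2e.go. As a rough equivalent, assuming the cluster is already up and a kubeconfig is at hand, the compiled e2e binary can be pointed at the same focus directly; the paths and flags below are illustrative, not taken from this job:

# build the e2e test binary, then run only the focused spec against an existing cluster
make WHAT=test/e2e/e2e.test
./_output/bin/e2e.test --kubeconfig="$HOME/.kube/config" \
    --ginkgo.focus='Servers with support for API chunking should support continue listing from the last key'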
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00118e780)
    test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 15:08:29.132
Nov 25 15:08:29.133: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/25/22 15:08:29.134
Nov 25 15:08:29.173: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:31.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:33.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:35.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:37.214: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:39.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:41.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:43.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:45.214: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:47.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:49.214: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:51.214: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:53.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:55.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:57.214: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:59.213: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:59.252: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:08:59.252: INFO: Unexpected error:
    <*errors.errorString | 0xc0001fda10>: {
        s: "timed out waiting for the condition",
    }
Nov 25 15:08:59.252: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00118e780)
    test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 25 15:08:59.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 15:08:59.292
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
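Every request in the log above dies with connection refused against https://34.82.189.151, so the BeforeEach never manages to create its namespace and gives up after roughly 30 seconds of retries. A quick reachability probe of the apiserver endpoint (a hypothetical follow-up, not part of this job's output) is the natural first check:

# not from the job: confirm the control-plane endpoint seen in the log is serving
kubectl --kubeconfig=/workspace/.kube/config get --raw /healthz
curl -sk https://34.82.189.151/healthz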
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
    test/e2e/apps/cronjob.go:111 +0x376

There were additional failures detected after the initial failure:

[FAILED] Nov 25 14:59:56.334: failed to list events in namespace "cronjob-6456": Get "https://34.82.189.151/api/v1/namespaces/cronjob-6456/events": dial tcp 34.82.189.151:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44

----------

[FAILED] Nov 25 14:59:56.375: Couldn't delete ns: "cronjob-6456": Delete "https://34.82.189.151/api/v1/namespaces/cronjob-6456": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/cronjob-6456", Err:(*net.OpError)(0xc001ab38b0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 14:58:29.69
Nov 25 14:58:29.690: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/25/22 14:58:29.694
STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 14:58:29.85
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 14:58:29.991
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/apps/cronjob.go:96
STEP: Creating a suspended cronjob 11/25/22 14:58:30.105
STEP: Ensuring no jobs are scheduled 11/25/22 14:58:30.173
STEP: Ensuring no job exists by listing jobs explicitly 11/25/22 14:59:56.214
Nov 25 14:59:56.254: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-6456:
    <*url.Error | 0xc001987a40>: {
        Op: "Get",
        URL: "https://34.82.189.151/apis/batch/v1/namespaces/cronjob-6456/jobs",
        Err: <*net.OpError | 0xc00172b400>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc001d456b0>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc001545420>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 14:59:56.254: FAIL: Failed to list the CronJobs in namespace cronjob-6456: Get "https://34.82.189.151/apis/batch/v1/namespaces/cronjob-6456/jobs": dial tcp 34.82.189.151:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
    test/e2e/apps/cronjob.go:111 +0x376
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 25 14:59:56.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 14:59:56.294
STEP: Collecting events from namespace "cronjob-6456". 11/25/22 14:59:56.294
Nov 25 14:59:56.334: INFO: Unexpected error: failed to list events in namespace "cronjob-6456":
    <*url.Error | 0xc001d45cb0>: {
        Op: "Get",
        URL: "https://34.82.189.151/api/v1/namespaces/cronjob-6456/events",
        Err: <*net.OpError | 0xc001c6d1d0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc001d45c80>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc000111d40>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 25 14:59:56.334: FAIL: failed to list events in namespace "cronjob-6456": Get "https://34.82.189.151/api/v1/namespaces/cronjob-6456/events": dial tcp 34.82.189.151:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000d6a5c0, {0xc0038258f0, 0xc})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001f264e0}, {0xc0038258f0, 0xc})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000d6a650?, {0xc0038258f0?, 0x7fa7740?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00080b860)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc000962750?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc000962750?, 0x29449fc?}, {0xae73300?, 0xc001f66780?, 0xc000d93ed8?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-6456" for this suite. 11/25/22 14:59:56.334
Nov 25 14:59:56.375: FAIL: Couldn't delete ns: "cronjob-6456": Delete "https://34.82.189.151/api/v1/namespaces/cronjob-6456": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/cronjob-6456", Err:(*net.OpError)(0xc001ab38b0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00080b860)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc000962640?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc000962640?, 0x0?}, {0xae73300?, 0x5?, 0xc0009fa558?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
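Here the spec fails first (listing jobs in cronjob-6456) and the deferred namespace deletion then fails for the same connection-refused reason, so the test namespace is presumably left behind. A hedged cleanup sketch for once the apiserver is reachable again, using the namespace name from the log above:

# not from the job: inspect and remove the leaked test namespace
kubectl get namespace cronjob-6456
kubectl delete namespace cronjob-6456 --wait=false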
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009b3860)
    test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 15:01:53.956
Nov 25 15:01:53.956: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/25/22 15:01:53.958
W1125 15:01:54.159148 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:01:54.159300 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
W1125 15:01:55.234666 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:01:55.234717 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
W1125 15:01:57.784405 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:01:57.784461 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
W1125 15:02:01.054087 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:02:01.054139 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
W1125 15:02:10.711399 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:02:10.711451 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
W1125 15:02:32.061192 10107 reflector.go:424] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
E1125 15:02:32.061248 10107 reflector.go:140] vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://34.82.189.151/api/v1/namespaces/cronjob-755/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:03:54.120: INFO: Unexpected error:
    <*fmt.wrapError | 0xc00508c000>: {
        msg: "wait for service account \"default\" in namespace \"cronjob-755\": timed out waiting for the condition",
        err: <*errors.errorString | 0xc0001fd9c0>{
            s: "timed out waiting for the condition",
        },
    }
Nov 25 15:03:54.121: FAIL: wait for service account "default" in namespace "cronjob-755": timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009b3860)
    test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 25 15:03:54.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 15:03:54.205
STEP: Collecting events from namespace "cronjob-755". 11/25/22 15:03:54.205
STEP: Found 0 events.
11/25/22 15:03:54.247 Nov 25 15:03:54.289: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 15:03:54.289: INFO: Nov 25 15:03:54.331: INFO: Logging node info for node bootstrap-e2e-master Nov 25 15:03:54.373: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 57fbafcc-fd48-4c2a-b8af-d2f45e071824 2855 0 2022-11-25 14:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 14:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 15:01:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.189.151,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a27504a9a8de9326ab25236db517b6d4,SystemUUID:a27504a9-a8de-9326-ab25-236db517b6d4,BootID:fd4b6e0f-8d3b-43d1-8d87-0b5f34de48b4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:03:54.373: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 15:03:54.420: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 15:03:54.489: INFO: metadata-proxy-v0.1-2v8cl started at 2022-11-25 14:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 15:03:54.489: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:03:54.489: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:03:54.489: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container etcd-container ready: true, restart count 0 Nov 25 15:03:54.489: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 15:03:54.489: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container kube-apiserver ready: true, restart count 2 Nov 25 15:03:54.489: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container kube-controller-manager ready: false, restart count 4 Nov 25 15:03:54.489: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 25 15:03:54.489: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container l7-lb-controller ready: false, restart count 5 Nov 25 15:03:54.489: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container etcd-container ready: true, restart count 1 Nov 25 15:03:54.489: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.489: INFO: Container kube-scheduler ready: true, restart count 5 Nov 25 15:03:54.692: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 15:03:54.692: INFO: Logging node info for node bootstrap-e2e-minion-group-cs2j Nov 25 15:03:54.736: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-cs2j 709b4477-dd95-4ae0-b576-f41790f3abc7 3634 0 2022-11-25 14:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-cs2j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-cs2j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9023":"bootstrap-e2e-minion-group-cs2j"} 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 15:00:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:01:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:03:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-cs2j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:37 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.154.188,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:009dcaae494ddb3388c5512015911a5e,SystemUUID:009dcaae-494d-db33-88c5-512015911a5e,BootID:0ab614df-9d04-456f-9e89-54d5c6a29e6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0,DevicePath:,},},Config:nil,},} Nov 25 15:03:54.736: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-cs2j Nov 25 15:03:54.781: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-cs2j Nov 25 15:03:54.966: INFO: kube-proxy-bootstrap-e2e-minion-group-cs2j started at 2022-11-25 14:55:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container kube-proxy ready: true, restart count 4 Nov 25 15:03:54.966: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) Nov 25 15:03:54.966: INFO: Container csi-attacher ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container csi-provisioner ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container csi-resizer ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container csi-snapshotter ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container hostpath ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container liveness-probe ready: false, restart count 1 Nov 25 15:03:54.966: INFO: Container node-driver-registrar ready: false, restart count 1 Nov 25 15:03:54.966: INFO: nfs-io-client started at 2022-11-25 14:59:28 +0000 UTC (1+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Init container nfs-io-init ready: true, restart count 0 Nov 25 15:03:54.966: INFO: Container nfs-io-client ready: false, restart count 0 Nov 25 15:03:54.966: INFO: metadata-proxy-v0.1-jj4l2 started at 2022-11-25 14:55:31 +0000 UTC (0+2 container statuses recorded) Nov 25 15:03:54.966: INFO: 
Container metadata-proxy ready: true, restart count 0 Nov 25 15:03:54.966: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:03:54.966: INFO: konnectivity-agent-zd86w started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container konnectivity-agent ready: true, restart count 3 Nov 25 15:03:54.966: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-grzvg started at 2022-11-25 15:01:41 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:03:54.966: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-kzgc5 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:54.966: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-w5p2t started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:54.966: INFO: affinity-lb-sx85v started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container affinity-lb ready: true, restart count 1 Nov 25 15:03:54.966: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:01:31 +0000 UTC (0+7 container statuses recorded) Nov 25 15:03:54.966: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container hostpath ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 15:03:54.966: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 15:03:54.966: INFO: l7-default-backend-8549d69d99-9c99n started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 15:03:54.966: INFO: coredns-6d97d5ddb-62vqw started at 2022-11-25 14:55:49 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container coredns ready: false, restart count 5 Nov 25 15:03:54.966: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:59:28 +0000 UTC (0+7 container statuses recorded) Nov 25 15:03:54.966: INFO: Container csi-attacher ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-provisioner ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-resizer ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container hostpath ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container liveness-probe ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container node-driver-registrar ready: false, restart count 2 Nov 25 15:03:54.966: INFO: pod-d4e49fe6-cb19-4441-805a-ab6bcf78fefc started at 2022-11-25 14:59:51 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:54.966: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-n2wrg started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 15:03:54.966: INFO: 
hostexec-bootstrap-e2e-minion-group-cs2j-8pmc5 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:03:54.966: INFO: pod-ddee8992-7f2b-418d-a1ff-6286a761b8e6 started at 2022-11-25 14:59:39 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:54.966: INFO: pod-subpath-test-preprovisionedpv-8bdm started at 2022-11-25 15:01:45 +0000 UTC (1+2 container statuses recorded) Nov 25 15:03:54.966: INFO: Init container init-volume-preprovisionedpv-8bdm ready: true, restart count 1 Nov 25 15:03:54.966: INFO: Container test-container-subpath-preprovisionedpv-8bdm ready: false, restart count 2 Nov 25 15:03:54.966: INFO: Container test-container-volume-preprovisionedpv-8bdm ready: false, restart count 2 Nov 25 15:03:54.966: INFO: coredns-6d97d5ddb-gzrc5 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container coredns ready: false, restart count 5 Nov 25 15:03:54.966: INFO: kube-dns-autoscaler-5f6455f985-q4zhz started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container autoscaler ready: false, restart count 5 Nov 25 15:03:54.966: INFO: volume-snapshot-controller-0 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container volume-snapshot-controller ready: false, restart count 5 Nov 25 15:03:54.966: INFO: reallocate-nodeport-test-mkwml started at 2022-11-25 14:58:49 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container netexec ready: false, restart count 4 Nov 25 15:03:54.966: INFO: netserver-0 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Container webserver ready: true, restart count 3 Nov 25 15:03:54.966: INFO: pod-subpath-test-dynamicpv-z4lq started at 2022-11-25 15:01:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:03:54.966: INFO: Init container init-volume-dynamicpv-z4lq ready: true, restart count 0 Nov 25 15:03:54.966: INFO: Container test-container-subpath-dynamicpv-z4lq ready: false, restart count 0 Nov 25 15:03:55.491: INFO: Latency metrics for node bootstrap-e2e-minion-group-cs2j Nov 25 15:03:55.491: INFO: Logging node info for node bootstrap-e2e-minion-group-nfrc Nov 25 15:03:55.534: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-nfrc 32e3ddf0-9230-4008-a6d2-35385dd6942e 2805 0 2022-11-25 14:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-nfrc kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-nfrc topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8209":"bootstrap-e2e-minion-group-nfrc"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 14:59:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-25 15:00:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-nfrc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.169.41,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:584471f9c540880f2541598af76fd197,SystemUUID:584471f9-c540-880f-2541-598af76fd197,BootID:925b3820-ba2a-4f24-949e-2611ee406076,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8209^ad4cfbc5-6cd1-11ed-9cc2-ea835e3ab61a kubernetes.io/csi/csi-hostpath-multivolume-8209^ae9f3a1c-6cd1-11ed-9cc2-ea835e3ab61a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8209^ad4cfbc5-6cd1-11ed-9cc2-ea835e3ab61a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8209^ae9f3a1c-6cd1-11ed-9cc2-ea835e3ab61a,DevicePath:,},},Config:nil,},} Nov 25 15:03:55.535: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-nfrc Nov 25 15:03:55.578: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-nfrc Nov 25 15:03:55.663: INFO: test-hostpath-type-6mlxk started at 2022-11-25 14:59:54 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 15:03:55.663: INFO: external-provisioner-vmwv7 started at 2022-11-25 15:01:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 25 15:03:55.663: INFO: nfs-server started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container nfs-server ready: true, restart count 0 Nov 25 15:03:55.663: INFO: pod-74aca48d-b1cc-47b2-a607-2327728b5c63 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:55.663: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-glccf started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:55.663: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-8qwt6 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:55.663: INFO: test-hostpath-type-hf5zm started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 25 15:03:55.663: INFO: affinity-lb-nhvsd started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container affinity-lb ready: true, restart count 0 Nov 25 15:03:55.663: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-rt6nn started at 2022-11-25 14:59:52 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:03:55.663: INFO: pod-b00facf7-169d-4380-8624-41a19caf7ad7 started at 2022-11-25 15:01:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:55.663: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:44 +0000 UTC (0+7 
container statuses recorded) Nov 25 15:03:55.663: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container hostpath ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 15:03:55.663: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 15:03:55.663: INFO: netserver-1 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container webserver ready: true, restart count 2 Nov 25 15:03:55.663: INFO: pod-subpath-test-inlinevolume-jxs5 started at 2022-11-25 15:01:37 +0000 UTC (1+2 container statuses recorded) Nov 25 15:03:55.663: INFO: Init container init-volume-inlinevolume-jxs5 ready: true, restart count 0 Nov 25 15:03:55.663: INFO: Container test-container-subpath-inlinevolume-jxs5 ready: true, restart count 2 Nov 25 15:03:55.663: INFO: Container test-container-volume-inlinevolume-jxs5 ready: true, restart count 2 Nov 25 15:03:55.663: INFO: kube-proxy-bootstrap-e2e-minion-group-nfrc started at 2022-11-25 14:55:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container kube-proxy ready: false, restart count 4 Nov 25 15:03:55.663: INFO: pod-subpath-test-inlinevolume-mpgz started at 2022-11-25 15:01:28 +0000 UTC (1+2 container statuses recorded) Nov 25 15:03:55.663: INFO: Init container init-volume-inlinevolume-mpgz ready: true, restart count 2 Nov 25 15:03:55.663: INFO: Container test-container-subpath-inlinevolume-mpgz ready: true, restart count 2 Nov 25 15:03:55.663: INFO: Container test-container-volume-inlinevolume-mpgz ready: true, restart count 2 Nov 25 15:03:55.663: INFO: konnectivity-agent-2vkfh started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 25 15:03:55.663: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-7mz5b started at 2022-11-25 14:59:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container agnhost-container ready: false, restart count 2 Nov 25 15:03:55.663: INFO: test-hostpath-type-csl98 started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 15:03:55.663: INFO: test-hostpath-type-tjzp5 started at 2022-11-25 15:01:43 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:03:55.663: INFO: metadata-proxy-v0.1-rfhls started at 2022-11-25 14:55:36 +0000 UTC (0+2 container statuses recorded) Nov 25 15:03:55.663: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:03:55.663: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:03:55.663: INFO: execpod-acceptfgmkh started at 2022-11-25 14:59:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:03:55.663: INFO: test-hostpath-type-zhmhh started at 2022-11-25 15:01:49 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 15:03:55.663: INFO: 
pod-acfbc4f2-eb46-487c-beec-554254dadba8 started at 2022-11-25 14:59:43 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:55.663: INFO: external-provisioner-626zt started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container nfs-provisioner ready: false, restart count 3 Nov 25 15:03:55.663: INFO: external-provisioner-nkqnt started at 2022-11-25 14:59:52 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:55.663: INFO: Container nfs-provisioner ready: true, restart count 3 Nov 25 15:03:55.964: INFO: Latency metrics for node bootstrap-e2e-minion-group-nfrc Nov 25 15:03:55.964: INFO: Logging node info for node bootstrap-e2e-minion-group-xfgk Nov 25 15:03:56.008: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xfgk ba54c0d2-29af-426e-a049-7278d60a9490 3602 0 2022-11-25 14:55:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xfgk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-xfgk topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5560":"bootstrap-e2e-minion-group-xfgk","csi-hostpath-multivolume-7269":"bootstrap-e2e-minion-group-xfgk"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 14:58:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 15:00:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 15:03:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-xfgk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.196.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35564864f08206045e292b7e32d4bbba,SystemUUID:35564864-f082-0604-5e29-2b7e32d4bbba,BootID:303b460c-3762-4624-8d44-d7a3124b5e6c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f,DevicePath:,},},Config:nil,},} Nov 25 15:03:56.008: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-xfgk Nov 25 15:03:56.056: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk Nov 25 15:03:56.130: INFO: kube-proxy-bootstrap-e2e-minion-group-xfgk started at 2022-11-25 14:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container kube-proxy ready: true, restart count 4 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-cznn8 started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container 
agnhost-container ready: true, restart count 1 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-x8ttp started at 2022-11-25 14:58:51 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-vl9kh started at 2022-11-25 15:01:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:56.130: INFO: var-expansion-39f058ab-2eab-4367-85ce-d5109afbf080 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container dapi-container ready: false, restart count 0 Nov 25 15:03:56.130: INFO: csi-mockplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+4 container statuses recorded) Nov 25 15:03:56.130: INFO: Container busybox ready: false, restart count 3 Nov 25 15:03:56.130: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 15:03:56.130: INFO: Container driver-registrar ready: true, restart count 4 Nov 25 15:03:56.130: INFO: Container mock ready: true, restart count 4 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-g6lzz started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:56.130: INFO: pod-subpath-test-inlinevolume-gcnh started at 2022-11-25 14:59:51 +0000 UTC (1+2 container statuses recorded) Nov 25 15:03:56.130: INFO: Init container init-volume-inlinevolume-gcnh ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container test-container-subpath-inlinevolume-gcnh ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container test-container-volume-inlinevolume-gcnh ready: true, restart count 1 Nov 25 15:03:56.130: INFO: metrics-server-v0.5.2-867b8754b9-4d9k2 started at 2022-11-25 14:55:55 +0000 UTC (0+2 container statuses recorded) Nov 25 15:03:56.130: INFO: Container metrics-server ready: false, restart count 4 Nov 25 15:03:56.130: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-6tq5z started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:03:56.130: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) Nov 25 15:03:56.130: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container hostpath ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 15:03:56.130: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 15:03:56.130: INFO: pod-590f7d35-2f3d-495d-bd05-1b5354a0e9cc started at 2022-11-25 14:58:45 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:56.130: INFO: affinity-lb-ljvdn started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container affinity-lb ready: true, restart count 1 Nov 25 15:03:56.130: INFO: local-io-client 
started at 2022-11-25 15:01:44 +0000 UTC (1+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 15:03:56.130: INFO: Container local-io-client ready: false, restart count 0 Nov 25 15:03:56.130: INFO: volume-prep-provisioning-1445 started at 2022-11-25 15:01:45 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container init-volume-provisioning-1445 ready: false, restart count 0 Nov 25 15:03:56.130: INFO: netserver-2 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container webserver ready: true, restart count 1 Nov 25 15:03:56.130: INFO: pod-7e54bb8d-001b-46bb-9722-23f799ce7bb1 started at 2022-11-25 15:01:46 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:03:56.130: INFO: volume-prep-provisioning-9978 started at 2022-11-25 15:01:47 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container init-volume-provisioning-9978 ready: false, restart count 0 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-5xt4b started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-xpsv2 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:56.130: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-ql4gr started at 2022-11-25 15:01:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:03:56.130: INFO: metadata-proxy-v0.1-nfk54 started at 2022-11-25 14:55:35 +0000 UTC (0+2 container statuses recorded) Nov 25 15:03:56.130: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:03:56.130: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:03:56.130: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:59:48 +0000 UTC (0+7 container statuses recorded) Nov 25 15:03:56.130: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container hostpath ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 15:03:56.130: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 15:03:56.130: INFO: pvc-volume-tester-vfs8x started at 2022-11-25 14:59:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container volume-tester ready: false, restart count 0 Nov 25 15:03:56.130: INFO: pod-subpath-test-preprovisionedpv-6ng9 started at 2022-11-25 15:01:45 +0000 UTC (1+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Init container init-volume-preprovisionedpv-6ng9 ready: true, restart count 0 Nov 25 15:03:56.130: INFO: Container test-container-subpath-preprovisionedpv-6ng9 ready: false, restart count 0 Nov 25 15:03:56.130: INFO: konnectivity-agent-sz497 started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container konnectivity-agent ready: 
false, restart count 2 Nov 25 15:03:56.130: INFO: pod-configmaps-04565d9c-c879-4e8e-9fe4-0833d5d0f610 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:03:56.130: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 15:03:56.391: INFO: Latency metrics for node bootstrap-e2e-minion-group-xfgk [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-755" for this suite. 11/25/22 15:03:56.391
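As an aside for anyone triaging dumps like the node-info blocks above: the per-node condition tables (Ready, MemoryPressure, KernelDeadlock, FrequentKubeletRestart, ...) are simply the Node object's status as stored in the API server. Below is a minimal client-go sketch that prints the same kind of data, assuming a reachable cluster, a kubeconfig at ~/.kube/config, and one of the node names from the log; it is purely illustrative and is not the e2e framework's own dump code.

// Illustrative sketch, NOT the e2e framework's dump code.
// Assumptions: reachable cluster, kubeconfig at ~/.kube/config,
// node name taken from the log above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name as seen in the log; substitute your own node.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "bootstrap-e2e-minion-group-nfrc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-30s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}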
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00290eea0}, 0xc000ae0500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000500400}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0023bddb8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0023bde48?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00290eea0?, 0xc0023bde88?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00290eea0}, 0x3, 0x3, 0xc000ae0500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:719 +0x3d0 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:18:31.122: Get "https://34.82.189.151/apis/apps/v1/namespaces/statefulset-5582/statefulsets": dial tcp 34.82.189.151:443: connect: connection refused In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76 ---------- [FAILED] Nov 25 15:18:31.201: failed to list events in namespace "statefulset-5582": Get "https://34.82.189.151/api/v1/namespaces/statefulset-5582/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:18:31.241: Couldn't delete ns: "statefulset-5582": Delete "https://34.82.189.151/api/v1/namespaces/statefulset-5582": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/statefulset-5582", Err:(*net.OpError)(0xc001844910)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
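The stack trace above shows the test stuck in statefulset.WaitForRunning, which repeatedly lists the StatefulSet's pods through wait.PollImmediate until the expected replicas are Running; the "connection refused" errors surface inside that poll once the API server endpoint goes away. Below is a rough sketch of that polling pattern, assuming illustrative interval/timeout values and a plain client-go clientset; it is not the framework's actual helper.

// Rough sketch of the polling pattern in the stack trace above (assumed values,
// not the framework's exact code): keep listing pods until enough are Running
// or the timeout expires. A list error (e.g. connection refused) ends the poll.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForRunningPods(cs kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// A transient API-server failure (such as the dial tcp ... connection
			// refused seen in the log) is returned here and stops the wait.
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("found %d/%d running pods\n", running, want)
		return running >= want, nil
	})
}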
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:12:50.859 Nov 25 15:12:50.859: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 15:12:50.861 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:12:51.328 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:12:51.546 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-5582 11/25/22 15:12:51.671 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:697 STEP: Creating stateful set ss in namespace statefulset-5582 11/25/22 15:12:51.754 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5582 11/25/22 15:12:51.834 Nov 25 15:12:51.931: INFO: Found 0 stateful pods, waiting for 1 Nov 25 15:13:01.987: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 11/25/22 15:13:01.987 Nov 25 15:13:02.066: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 15:13:03.572: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 15:13:03.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 15:13:03.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 15:13:03.646: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 25 15:13:13.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 15:13:13.737: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 15:13:14.067: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 15:13:14.067: INFO: ss-0 bootstrap-e2e-minion-group-xfgk Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:52 +0000 UTC }] Nov 25 15:13:14.067: INFO: ss-1 Pending [] Nov 25 15:13:14.067: INFO: Nov 25 15:13:14.067: INFO: StatefulSet ss has not reached scale 3, at 2 Nov 25 15:13:15.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.909497526s Nov 25 15:13:16.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.822390165s Nov 25 15:13:17.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.740696342s Nov 25 15:13:18.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.620613602s Nov 25 15:13:19.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 
4.491736207s Nov 25 15:13:20.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.409791437s Nov 25 15:13:21.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.301520722s Nov 25 15:13:22.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.205728835s Nov 25 15:13:23.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 126.112436ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5582 11/25/22 15:13:24.918 Nov 25 15:13:24.992: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:13:26.067: INFO: rc: 1 Nov 25 15:13:26.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 15:13:36.068: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:13:36.613: INFO: rc: 1 Nov 25 15:13:36.613: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 15:13:46.613: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:13:47.428: INFO: rc: 1 Nov 25 15:13:47.428: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 15:13:57.429: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:13:57.998: INFO: rc: 1 Nov 25 15:13:57.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 15:14:07.999: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:08.115: INFO: rc: 1 Nov 25 15:14:08.115: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:18.115: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:18.227: INFO: rc: 1 Nov 25 15:14:18.227: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:28.227: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:28.344: INFO: rc: 1 Nov 25 15:14:28.344: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:38.344: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:38.457: INFO: rc: 1 Nov 25 15:14:38.457: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:14:48.458: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:48.566: INFO: rc: 1 Nov 25 15:14:48.566: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:58.566: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:14:58.681: INFO: rc: 1 Nov 25 15:14:58.681: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:08.682: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:08.791: INFO: rc: 1 Nov 25 15:15:08.791: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:18.792: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:18.899: INFO: rc: 1 Nov 25 15:15:18.899: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:15:28.899: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:29.016: INFO: rc: 1 Nov 25 15:15:29.016: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:39.017: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:39.130: INFO: rc: 1 Nov 25 15:15:39.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:49.131: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:49.240: INFO: rc: 1 Nov 25 15:15:49.240: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:59.240: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:15:59.349: INFO: rc: 1 Nov 25 15:15:59.349: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:16:09.350: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:16:09.460: INFO: rc: 1 Nov 25 15:16:09.460: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:19.462: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:16:19.576: INFO: rc: 1 Nov 25 15:16:19.576: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:29.577: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:16:29.683: INFO: rc: 1 Nov 25 15:16:29.683: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:39.684: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:16:39.794: INFO: rc: 1 Nov 25 15:16:39.794: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:16:49.794: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:16:49.904: INFO: rc: 1 Nov 25 15:16:49.904: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:59.905: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:00.020: INFO: rc: 1 Nov 25 15:17:00.020: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:17:10.020: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:10.127: INFO: rc: 1 Nov 25 15:17:10.127: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:17:20.128: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:20.241: INFO: rc: 1 Nov 25 15:17:20.241: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:17:30.241: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:30.347: INFO: rc: 1 Nov 25 15:17:30.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:17:40.348: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:40.456: INFO: rc: 1 Nov 25 15:17:40.456: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:17:50.457: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:17:50.563: INFO: rc: 1 Nov 25 15:17:50.563: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m0.897s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m0.001s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5582 (Step Runtime: 4m26.837s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 2328 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc003921ac0, 0x10}, {0xc003921aac, 0x4}, {0xc0038e0d80, 0x38}, 0xc0038f7890?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc00290eea0?}, 0xc0023bde88?, {0xc0038e0d80, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc00290eea0}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:718 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d0d380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:00.563: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:18:00.668: INFO: rc: 1 Nov 25 15:18:00.668: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:18:10.668: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:18:10.781: INFO: rc: 1 Nov 25 15:18:10.781: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m20.898s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m20.003s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5582 (Step Runtime: 4m46.839s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 2328 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc003921ac0, 0x10}, {0xc003921aac, 0x4}, {0xc0038e0d80, 0x38}, 0xc0038f7890?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc00290eea0?}, 0xc0023bde88?, {0xc0038e0d80, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc00290eea0}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:718 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d0d380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:20.781: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:18:20.890: INFO: rc: 1 Nov 25 15:18:20.890: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 15:18:30.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5582 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 15:18:31.003: INFO: rc: 1 Nov 25 15:18:31.003: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Nov 25 15:18:31.043: INFO: Unexpected error: <*url.Error | 0xc0048a83c0>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/statefulset-5582/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc002333950>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00383e4e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001732620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:18:31.043: FAIL: Get "https://34.82.189.151/api/v1/namespaces/statefulset-5582/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00290eea0}, 0xc000ae0500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000500400}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0023bddb8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0023bde48?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00290eea0?, 0xc0023bde88?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00290eea0}, 0x3, 0x3, 0xc000ae0500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:719 +0x3d0 E1125 15:18:31.043450 10232 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00290eea0}, 0xc000ae0500)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000500400})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0023bddb8?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0023bde48?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00290eea0?, 0xc0023bde88?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00290eea0}, 0x3, 0x3, 0xc000ae0500)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()\n\ttest/e2e/apps/statefulset.go:719 +0x3d0", CustomMessage:""}} (Your Test Panicked test/e2e/framework/statefulset/rest.go:69 When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure ) goroutine 2328 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000172700}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000172700?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000172700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc001888e40, 0xb6}, {0xc001099640?, 0x75b521a?, 0xc001099660?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00137efd0, 0xa1}, {0xc0010996d8?, 0xc00137efd0?, 0xc001099700?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc0048a83c0}, {0x0?, 0xc003a7e0b0?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc00290eea0}, 0xc000ae0500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000500400}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0023bddb8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0023bde48?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc00290eea0?, 0xc0023bde88?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc00290eea0}, 0x3, 0x3, 0xc000ae0500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:719 +0x3d0 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d0d380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 25 15:18:31.083: INFO: Deleting all statefulset in ns statefulset-5582 Nov 25 15:18:31.122: INFO: Unexpected error: <*url.Error | 0xc0048a8900>: { Op: "Get", URL: "https://34.82.189.151/apis/apps/v1/namespaces/statefulset-5582/statefulsets", Err: <*net.OpError | 0xc002333d60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001892ea0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001732b40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:18:31.122: FAIL: Get "https://34.82.189.151/apis/apps/v1/namespaces/statefulset-5582/statefulsets": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc00290eea0}, {0xc0037b2120, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 15:18:31.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:18:31.162 STEP: Collecting events from namespace "statefulset-5582". 
11/25/22 15:18:31.162 Nov 25 15:18:31.201: INFO: Unexpected error: failed to list events in namespace "statefulset-5582": <*url.Error | 0xc00383ea80>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/statefulset-5582/events", Err: <*net.OpError | 0xc00052af00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00383ea50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001324560>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:18:31.201: FAIL: failed to list events in namespace "statefulset-5582": Get "https://34.82.189.151/api/v1/namespaces/statefulset-5582/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0010985c0, {0xc0037b2120, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00290eea0}, {0xc0037b2120, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001098650?, {0xc0037b2120?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00118e1e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00173f490?, 0xc003413fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000102be8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00173f490?, 0x29449fc?}, {0xae73300?, 0xc003413f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-5582" for this suite. 11/25/22 15:18:31.202 Nov 25 15:18:31.241: FAIL: Couldn't delete ns: "statefulset-5582": Delete "https://34.82.189.151/api/v1/namespaces/statefulset-5582": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/statefulset-5582", Err:(*net.OpError)(0xc001844910)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00118e1e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00173f3e0?, 0xc003413fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00173f3e0?, 0x0?}, {0xae73300?, 0x5?, 0xc0038c85e8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
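The "Your Test Panicked" block above is Ginkgo's generic guidance for assertions that fire outside its control, not something specific to this cluster: when an assertion fails inside a goroutine, the resulting Fail() panic has to be caught with a deferred GinkgoRecover(). The underlying failure in this run is still the refused connection to https://34.82.189.151; the panic text only explains how that failure surfaced. As a minimal, hypothetical Ginkgo v2 sketch of the recommended pattern (not the e2e framework's own code):

package sketch_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "GinkgoRecover sketch")
}

var _ = It("recovers assertions made in a goroutine", func() {
	done := make(chan struct{})
	go func() {
		// Without this deferred call, a failing assertion in the goroutine
		// panics outside Ginkgo's control, which is the "Your Test Panicked"
		// situation reported in the log above.
		defer GinkgoRecover()
		defer close(done)
		Expect(1 + 1).To(Equal(2))
	}()
	<-done
})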
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c0a1e0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:01:54.645 Nov 25 15:01:54.645: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 15:01:54.647 Nov 25 15:01:54.686: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:56.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:58.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:00.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:02.725: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:04.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:06.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:08.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:10.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:12.727: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:14.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:16.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:18.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:20.726: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:22.725: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:24.725: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:24.764: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:24.764: INFO: Unexpected error: <*errors.errorString | 0xc0001fd960>: { s: "timed out waiting for the condition", } Nov 25 15:02:24.765: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c0a1e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 15:02:24.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:02:24.804 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
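The shape of this failure (and of the [BeforeEach] namespace-creation retries throughout this report) is the apimachinery polling helper visible in the stack traces: wait.PollImmediate re-runs a condition on an interval until it succeeds or the timeout produces the literal error "timed out waiting for the condition". A rough, self-contained sketch of that retry loop follows; the local health-check URL is a hypothetical stand-in for the real client-go namespace calls.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Hypothetical endpoint standing in for the unreachable apiserver.
	const endpoint = "http://127.0.0.1:8080/healthz"

	// Poll every 2s for up to 30s, roughly the cadence of the
	// "Unexpected error while creating namespace" retries above.
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		resp, err := http.Get(endpoint)
		if err != nil {
			fmt.Printf("still failing: %v\n", err)
			return false, nil // transient failure: keep polling
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		// This is the "timed out waiting for the condition" string in the log.
		fmt.Println(err)
	}
}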
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00113c3c0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:08:26.307 Nov 25 15:08:26.307: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/25/22 15:08:26.309 Nov 25 15:08:26.348: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:28.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:30.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:32.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:34.389: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:36.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:38.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:40.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:42.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:44.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:46.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:48.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:50.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:52.389: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:54.387: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:56.388: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:56.428: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:56.428: INFO: Unexpected error: <*errors.errorString | 0xc00017da20>: { s: "timed out waiting for the condition", } Nov 25 15:08:56.428: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00113c3c0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 25 15:08:56.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:08:56.467 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e982d0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:08:56.294 Nov 25 15:08:56.294: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 15:08:56.296 Nov 25 15:08:56.336: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:58.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:00.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:02.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:04.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:06.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:08.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:10.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:12.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:14.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:16.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:18.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:20.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:22.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:24.376: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:26.375: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:26.415: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:26.415: INFO: Unexpected error: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 25 15:09:26.415: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e982d0) test/e2e/framework/framework.go:241 
+0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 15:09:26.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:09:26.454 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:07:55.816: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1726 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-1726/pods/httpd": dial tcp 34.82.189.151:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 25 15:07:55.895: failed to list events in namespace "kubectl-1726": Get "https://34.82.189.151/api/v1/namespaces/kubectl-1726/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:07:55.935: Couldn't delete ns: "kubectl-1726": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-1726": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/kubectl-1726", Err:(*net.OpError)(0xc002a745f0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 from junit_01.xml
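Each "Running '/workspace/.../kubectl --server=... --kubeconfig=...'" line followed by "rc: 1" in this report, including the failed delete in the summary just above, is the test framework shelling out to the kubectl binary and recording its exit code, stdout, and stderr. As a hedged illustration only, with hypothetical names rather than the framework's real builder API, that pattern looks roughly like the sketch below; the detailed log for this spec follows it.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

// runKubectl is a hypothetical stand-in for the e2e framework's kubectl
// builder: it invokes kubectl with explicit --server/--kubeconfig/--namespace
// flags and reports rc/stdout/stderr the way the log lines above do.
func runKubectl(server, kubeconfig, namespace string, args ...string) (int, string, string) {
	full := append([]string{
		"--server=" + server,
		"--kubeconfig=" + kubeconfig,
		"--namespace=" + namespace,
	}, args...)
	cmd := exec.Command("kubectl", full...)

	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	rc := 0
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			rc = exitErr.ExitCode() // e.g. the "rc: 1" / "error: exit status 1" above
		} else {
			rc = -1 // kubectl binary missing, etc.
		}
	}
	return rc, stdout.String(), stderr.String()
}

func main() {
	rc, out, errOut := runKubectl("https://127.0.0.1:6443", "/tmp/kubeconfig", "default", "get", "pods")
	fmt.Printf("rc: %d\nstdout: %s\nstderr: %s\n", rc, out, errOut)
}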
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:05:46.128 Nov 25 15:05:46.128: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 15:05:46.129 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:05:46.471 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:05:46.577 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 15:05:46.694 Nov 25 15:05:46.694: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1726 create -f -' Nov 25 15:05:47.508: INFO: stderr: "" Nov 25 15:05:47.508: INFO: stdout: "pod/httpd created\n" Nov 25 15:05:47.508: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 15:05:47.508: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-1726" to be "running and ready" Nov 25 15:05:47.616: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 108.546331ms Nov 25 15:05:47.616: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:05:49.703: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194932851s Nov 25 15:05:49.703: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:05:51.684: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176017565s Nov 25 15:05:51.684: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:05:53.705: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197455426s Nov 25 15:05:53.705: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:05:55.684: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176398026s Nov 25 15:05:55.684: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:05:57.716: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.208458463s Nov 25 15:05:57.716: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:05:59.681: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.173387971s Nov 25 15:05:59.681: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:01.758: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.249647366s Nov 25 15:06:01.758: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:03.759: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.251540752s Nov 25 15:06:03.759: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:05.830: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.322597844s Nov 25 15:06:05.831: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:07.758: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.250469547s Nov 25 15:06:07.758: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:09.681: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.172925118s Nov 25 15:06:09.681: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:11.685: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.177361401s Nov 25 15:06:11.685: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:13.672: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.164303654s Nov 25 15:06:13.672: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:15.713: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.20494704s Nov 25 15:06:15.713: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:17.674: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.166337864s Nov 25 15:06:17.674: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:19.736: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.228389415s Nov 25 15:06:19.736: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:21.711: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 34.203120052s Nov 25 15:06:21.711: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:23.673: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.165459132s Nov 25 15:06:23.673: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:25.708: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 38.1999609s Nov 25 15:06:25.708: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:27.716: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 40.207971584s Nov 25 15:06:27.716: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:29.670: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 42.162591821s Nov 25 15:06:29.671: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:31.681: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.172948298s Nov 25 15:06:31.681: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:33.701: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 46.192698012s Nov 25 15:06:33.701: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:35.690: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 48.182201148s Nov 25 15:06:35.690: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:37.680: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 50.172001233s Nov 25 15:06:37.680: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:39.702: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.193901514s Nov 25 15:06:39.702: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:41.668: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 54.159793741s Nov 25 15:06:41.668: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:43.688: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 56.180475391s Nov 25 15:06:43.688: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:45.708: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 58.199662188s Nov 25 15:06:45.708: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:47.694: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.185627891s Nov 25 15:06:47.694: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:49.767: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.258695845s Nov 25 15:06:49.767: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:51.680: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.171751833s Nov 25 15:06:51.680: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:54.002: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.494197031s Nov 25 15:06:54.002: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:55.671: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.162742051s Nov 25 15:06:55.671: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:57.712: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.203722502s Nov 25 15:06:57.712: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:06:59.678: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.170261033s Nov 25 15:06:59.678: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:01.686: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.177830973s Nov 25 15:07:01.686: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:03.697: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.189001906s Nov 25 15:07:03.697: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:05.713: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.204791995s Nov 25 15:07:05.713: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:07.689: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.181080636s Nov 25 15:07:07.689: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:09.688: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.179779655s Nov 25 15:07:09.688: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:11.673: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.164901613s Nov 25 15:07:11.673: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:13.706: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.197794467s Nov 25 15:07:13.706: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:15.678: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.170104839s Nov 25 15:07:15.678: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:17.683: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.175255346s Nov 25 15:07:17.683: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:19.694: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.185782309s Nov 25 15:07:19.694: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:21.675: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.167352376s Nov 25 15:07:21.675: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:23.675: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.167364375s Nov 25 15:07:23.675: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:25.726: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.217679713s Nov 25 15:07:25.726: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:27.700: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.192293771s Nov 25 15:07:27.700: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:29.688: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.179734055s Nov 25 15:07:29.688: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:31.684: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.176212603s Nov 25 15:07:31.684: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:33.668: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.16025365s Nov 25 15:07:33.668: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:35.677: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.168807885s Nov 25 15:07:35.677: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:37.688: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.180323878s Nov 25 15:07:37.688: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:39.677: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.169406137s Nov 25 15:07:39.677: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:41.673: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.165266913s Nov 25 15:07:41.673: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:43.683: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.175550995s Nov 25 15:07:43.684: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:45.828: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.319993269s Nov 25 15:07:45.828: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:47.689: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.181305065s Nov 25 15:07:47.689: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:49.742: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.23427058s Nov 25 15:07:49.742: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:51.671: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.163587134s Nov 25 15:07:51.672: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:53.707: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.199401787s Nov 25 15:07:53.707: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-xfgk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:06:28 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:47 +0000 UTC }] Nov 25 15:07:55.656: INFO: Encountered non-retryable error while getting pod kubectl-1726/httpd: Get "https://34.82.189.151/api/v1/namespaces/kubectl-1726/pods/httpd": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:07:55.656: INFO: Pod httpd failed to be running and ready. Nov 25 15:07:55.656: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Nov 25 15:07:55.656: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 15:07:55.657 Nov 25 15:07:55.657: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1726 delete --grace-period=0 --force -f -' Nov 25 15:07:55.815: INFO: rc: 1 Nov 25 15:07:55.815: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc00051c4f0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1726 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.82.189.151/api/v1/namespaces/kubectl-1726/pods/httpd\": dial tcp 34.82.189.151:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 15:07:55.816: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1726 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
error: error when deleting "STDIN": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-1726/pods/httpd": dial tcp 34.82.189.151:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000b726e0?, 0x0?}, {0xc002e03ab0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc002e03ab0, 0xc}, {0xc00013fa20, 0x145}, {0xc000e03ec0?, 0x8?, 0x7f9767655a68?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc00013fa20, 0x145}, {0xc002e03ab0, 0xc}, {0xc00051c390, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 15:07:55.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:07:55.855 STEP: Collecting events from namespace "kubectl-1726". 11/25/22 15:07:55.856 Nov 25 15:07:55.895: INFO: Unexpected error: failed to list events in namespace "kubectl-1726": <*url.Error | 0xc002d930e0>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/kubectl-1726/events", Err: <*net.OpError | 0xc0036b8640>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002334540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000faa4a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:07:55.895: FAIL: failed to list events in namespace "kubectl-1726": Get "https://34.82.189.151/api/v1/namespaces/kubectl-1726/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0018105c0, {0xc002e03ab0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002a17ba0}, {0xc002e03ab0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001810650?, {0xc002e03ab0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000e982d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc002e5a3d0?, 0xc0013fafb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0013d3dc8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002e5a3d0?, 0x29449fc?}, {0xae73300?, 0xc0013faf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-1726" for this suite. 
11/25/22 15:07:55.896 Nov 25 15:07:55.935: FAIL: Couldn't delete ns: "kubectl-1726": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-1726": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/kubectl-1726", Err:(*net.OpError)(0xc002a745f0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e982d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc002e5a350?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002e5a350?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
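Editor's note: the long run of "Error evaluating pod condition running and ready" messages above is the framework polling the httpd pod's status every couple of seconds until it reports Ready=True or the budget expires; here the wait was then cut short when the apiserver at 34.82.189.151:443 started refusing connections. A minimal client-go sketch of that polling pattern, assuming the kubeconfig path shown in the log; the helper name and intervals are illustrative, not the framework's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunningAndReady polls until the pod is Running with Ready=True,
// the same condition whose repeated evaluation fills the log above.
// Illustrative only; the e2e framework has its own richer helpers.
func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// The real framework eventually treats "connection refused" as
			// non-retryable; this sketch simply keeps polling.
			return false, nil
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForPodRunningAndReady(cs, "kubectl-1726", "httpd", 5*time.Minute)
	fmt.Println("running and ready:", err) // non-nil when the wait times out or is aborted
}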
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/framework/kubectl/builder.go:87 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc002402000?, 0x0?}, {0xc001e106a0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc001e106a0, 0xc}, {0xc003bee840, 0x145}, {0xc00328bec0?, 0x8?, 0x7f3d423435b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc003bee840, 0x145}, {0xc001e106a0, 0xc}, {0xc0019b81f0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:07:56.215: failed to list events in namespace "kubectl-4487": Get "https://34.82.189.151/api/v1/namespaces/kubectl-4487/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:07:56.255: Couldn't delete ns: "kubectl-4487": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-4487": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/kubectl-4487", Err:(*net.OpError)(0xc002836410)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:05:04.901 Nov 25 15:05:04.901: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 15:05:04.91 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:05:05.21 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:05:05.384 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 15:05:05.616 Nov 25 15:05:05.617: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4487 create -f -' Nov 25 15:05:06.552: INFO: stderr: "" Nov 25 15:05:06.552: INFO: stdout: "pod/httpd created\n" Nov 25 15:05:06.552: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 15:05:06.552: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4487" to be "running and ready" Nov 25 15:05:06.622: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 70.204564ms Nov 25 15:05:06.622: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:05:08.741: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189117813s Nov 25 15:05:08.741: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:05:10.676: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124411487s Nov 25 15:05:10.676: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:05:12.705: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.153057466s Nov 25 15:05:12.705: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC }] Nov 25 15:05:14.745: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.193177313s Nov 25 15:05:14.745: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC }] Nov 25 15:05:16.713: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.161256202s Nov 25 15:05:16.713: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:05:06 +0000 UTC }] Nov 25 15:05:18.697: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.145700395s Nov 25 15:05:18.697: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 15:05:18.697: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command without --restart=Never, but with --rm test/e2e/kubectl/kubectl.go:571 Nov 25 15:05:18.697: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4487 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --rm --pod-running-timeout=2m0s failure-3 -- /bin/sh -c cat && exit 42' Nov 25 15:07:27.744: INFO: rc: 1 Nov 25 15:07:27.744: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:27.899: INFO: Pod failure-3 still exists Nov 25 15:07:29.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:30.006: INFO: Pod failure-3 still exists Nov 25 15:07:31.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:31.971: INFO: Pod failure-3 still exists Nov 25 15:07:33.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:33.963: INFO: Pod failure-3 still exists Nov 25 15:07:35.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:35.964: INFO: Pod failure-3 still exists Nov 25 15:07:37.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:38.024: INFO: Pod failure-3 still exists Nov 25 15:07:39.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:39.973: INFO: Pod failure-3 still exists Nov 25 15:07:41.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:41.958: INFO: Pod failure-3 still exists Nov 25 15:07:43.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:43.969: INFO: Pod failure-3 still exists Nov 25 15:07:45.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:45.959: INFO: Pod failure-3 still exists Nov 25 15:07:47.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:47.968: INFO: Pod failure-3 still exists Nov 25 15:07:49.900: INFO: Waiting for pod failure-3 to disappear Nov 25 
15:07:49.965: INFO: Pod failure-3 still exists Nov 25 15:07:51.901: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:51.961: INFO: Pod failure-3 still exists Nov 25 15:07:53.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:53.955: INFO: Pod failure-3 still exists Nov 25 15:07:55.900: INFO: Waiting for pod failure-3 to disappear Nov 25 15:07:55.941: INFO: Encountered non-retryable error while listing pods: Get "https://34.82.189.151/api/v1/namespaces/kubectl-4487/pods": dial tcp 34.82.189.151:443: connect: connection refused [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 15:07:55.941 Nov 25 15:07:55.941: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4487 delete --grace-period=0 --force -f -' Nov 25 15:07:56.135: INFO: rc: 1 Nov 25 15:07:56.135: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0019b8430>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4487 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.82.189.151/api/v1/namespaces/kubectl-4487/pods/httpd\": dial tcp 34.82.189.151:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 15:07:56.135: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4487 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-4487/pods/httpd": dial tcp 34.82.189.151:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc002402000?, 0x0?}, {0xc001e106a0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc001e106a0, 0xc}, {0xc003bee840, 0x145}, {0xc00328bec0?, 0x8?, 0x7f3d423435b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc003bee840, 0x145}, {0xc001e106a0, 0xc}, {0xc0019b81f0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 15:07:56.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:07:56.174 STEP: Collecting events from namespace "kubectl-4487". 
11/25/22 15:07:56.175 Nov 25 15:07:56.215: INFO: Unexpected error: failed to list events in namespace "kubectl-4487": <*url.Error | 0xc00287e000>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/kubectl-4487/events", Err: <*net.OpError | 0xc00222c000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d312f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001048000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:07:56.215: FAIL: failed to list events in namespace "kubectl-4487": Get "https://34.82.189.151/api/v1/namespaces/kubectl-4487/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0013ae5c0, {0xc001e106a0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0022b8680}, {0xc001e106a0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0013ae650?, {0xc001e106a0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000c162d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004e875e0?, 0xc004ee1fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0021da228?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004e875e0?, 0x29449fc?}, {0xae73300?, 0xc004ee1f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-4487" for this suite. 11/25/22 15:07:56.215 Nov 25 15:07:56.255: FAIL: Couldn't delete ns: "kubectl-4487": Delete "https://34.82.189.151/api/v1/namespaces/kubectl-4487": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/kubectl-4487", Err:(*net.OpError)(0xc002836410)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c162d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004e87560?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004e87560?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
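Editor's note: both kubectl entries above fail in cleanup the same way. The framework shells out to the kubectl binary (KubectlBuilder.ExecOrDie / RunKubectlOrDieInput in the stack traces), and a non-zero exit code, here caused purely by the refused connection to 34.82.189.151:443, is converted into a test failure. A rough sketch of that shell-out pattern, assuming kubectl is on PATH; the function name and the sample invocation are illustrative, not the framework's actual builder.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

// runKubectl runs the kubectl binary and reports stdout plus the exit code,
// roughly the pattern behind the exec.CodeExitError values in the log above.
func runKubectl(args ...string) (string, int, error) {
	cmd := exec.Command("kubectl", args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return stdout.String(), exitErr.ExitCode(),
			fmt.Errorf("kubectl %v failed: %s", args, stderr.String())
	}
	return stdout.String(), 0, err
}

func main() {
	// With the apiserver refusing connections, kubectl exits with rc=1,
	// matching the "rc: 1" lines in the cleanup steps above.
	out, rc, err := runKubectl(
		"--server=https://34.82.189.151",
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=kubectl-4487",
		"get", "pods", "httpd",
	)
	fmt.Println(out, rc, err)
}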
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001245d10) test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:08:59.185 Nov 25 15:08:59.185: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/25/22 15:08:59.187 Nov 25 15:08:59.228: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:01.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:03.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:05.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:07.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:09.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:11.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:13.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:15.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:17.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:19.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:21.267: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:23.267: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:25.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:27.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:29.268: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:29.307: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:09:29.307: INFO: Unexpected error: <*errors.errorString | 0xc0002498b0>: { s: "timed out waiting for the condition", } Nov 25 15:09:29.307: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001245d10) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/node/init/init.go:32 Nov 25 15:09:29.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:237 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:09:29.347 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update tear down framework | framework.go:193
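Editor's note: this entry never reaches its test body. framework.BeforeEach retries the namespace POST against the refused connection for about 30 seconds and then gives up, and the "timed out waiting for the condition" FAIL is simply the wait package's timeout error message. A library-style sketch of that retry, with illustrative intervals and names:

// Package sketch illustrates the namespace-creation retry whose failures
// fill the BeforeEach log above. Intervals and names are illustrative.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries the create until it succeeds or ~30s elapse.
// On timeout, wait returns an error whose message is the literal
// "timed out waiting for the condition" seen in the FAIL line above.
func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var ns *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		created, err := cs.CoreV1().Namespaces().Create(context.TODO(), &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			fmt.Println("Unexpected error while creating namespace:", err)
			return false, nil // keep retrying; connection refused may be transient
		}
		ns = created
		return true, nil
	})
	return ns, err
}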
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000eef770) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
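Editor's note: the extra [PANICKED] failure above follows from the primary one. framework.BeforeEach (whose retry log follows below) never created a client or namespace, so the AfterEach at test/e2e/network/loadbalancer.go:1262 most likely dereferences state that was never set up. A hedged sketch of a defensive cleanup guard; the names are illustrative, not the actual test's.

// Package sketch shows a defensive AfterEach-style cleanup. Names are
// illustrative; the real cleanup lives in test/e2e/network/loadbalancer.go.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupESIPP skips teardown when setup never ran, avoiding the kind of nil
// pointer dereference reported in the [PANICKED] block above.
func cleanupESIPP(cs kubernetes.Interface, svc *corev1.Service, ns string) {
	if cs == nil || svc == nil || ns == "" {
		return // BeforeEach failed before creating anything; nothing to clean up
	}
	_ = cs.CoreV1().Services(ns).Delete(context.TODO(), svc.Name, metav1.DeleteOptions{})
}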
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:14:02.738 Nov 25 15:14:02.738: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:14:02.74 Nov 25 15:14:02.779: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:04.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:06.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:08.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:10.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:12.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:14.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:16.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:18.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:20.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:22.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:24.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:26.819: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:28.819: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get 
"https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:30.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get 
"https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get 
"https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get 
"https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:32.818: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:32.857: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:32.857: INFO: Unexpected error: <*errors.errorString | 0xc0001fda10>: { s: "timed out waiting for the condition", } Nov 25 15:14:32.857: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000eef770) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:14:32.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ERROR: get pod list in csi-mock-volumes-7090-1652: Get "https://34.82.189.151/api/v1/namespaces/csi-mock-volumes-7090-1652/pods": dial tcp 34.82.189.151:443: connect: connection refused [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:14:32.897 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | 
framework.go:193
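For context on the failure mode above: the framework's BeforeEach keeps retrying namespace creation every ~2s and finally gives up with "timed out waiting for the condition". A minimal Go sketch of that kind of retry loop, assuming a client-go clientset; the helper name and the 30s budget are illustrative, not the framework's actual code:

package nsretry

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries namespace creation until the apiserver answers
// or the budget runs out, mirroring the "Unexpected error while creating
// namespace ..." / "timed out waiting for the condition" sequence in the log.
func createTestNamespace(ctx context.Context, cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// Transient errors such as "connection refused" are logged and retried.
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil
		}
		created = ns
		return true, nil
	})
	return created, err
}

When the control plane never comes back within the budget, wait.PollImmediate returns the generic "timed out waiting for the condition" error, which is exactly the text surfaced as the FAIL above.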
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011820e0, {0x75c6f7c, 0x9}, 0xc001ced1d0)
    test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011820e0, 0x7f548c03ceb0?)
    test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011820e0, 0x3b?)
    test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000da2000, {0x0, 0x0, 0xc0015f2720?})
    test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1382 +0x445
There were additional failures detected after the initial failure:
[FAILED] Nov 25 15:07:57.234: failed to list events in namespace "esipp-219": Get "https://34.82.189.151/api/v1/namespaces/esipp-219/events": dial tcp 34.82.189.151:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 15:07:57.274: Couldn't delete ns: "esipp-219": Delete "https://34.82.189.151/api/v1/namespaces/esipp-219": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-219", Err:(*net.OpError)(0xc001f473b0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
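Both DeferCleanup failures above come from cleanup steps that themselves need the apiserver: dumping the namespace's events and deleting the namespace. A minimal sketch of the event-listing call, assuming a client-go clientset; dumpEvents is a hypothetical helper, not the framework's dumpEventsInNamespace:

package nsdump

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEvents lists the events in the test namespace, the first call the
// post-failure dump makes; with the apiserver down it returns the same
// "connection refused" error reported in the [FAILED] entry above.
func dumpEvents(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	events, err := cs.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("failed to list events in namespace %q: %w", namespace, err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s: %s\n", e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
	return nil
}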
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:05:06.838 Nov 25 15:05:06.838: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:05:06.84 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:05:07.067 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:05:07.178 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-219/external-local-nodes with type=LoadBalancer 11/25/22 15:05:07.536 STEP: setting ExternalTrafficPolicy=Local 11/25/22 15:05:07.536 STEP: waiting for loadbalancer for service esipp-219/external-local-nodes 11/25/22 15:05:07.871 Nov 25 15:05:07.871: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-219/external-local-nodes 11/25/22 15:06:32.049 Nov 25 15:06:32.049: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-219 11/25/22 15:06:32.154 STEP: creating a selector 11/25/22 15:06:32.154 STEP: Creating the service pods in kubernetes 11/25/22 15:06:32.154 Nov 25 15:06:32.154: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 15:06:32.841: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-219" to be "running and ready" Nov 25 15:06:32.913: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 72.04203ms Nov 25 15:06:32.913: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:35.004: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162779479s Nov 25 15:06:35.004: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:36.970: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128622299s Nov 25 15:06:36.970: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:38.971: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130130451s Nov 25 15:06:38.971: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:41.062: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220914509s Nov 25 15:06:41.062: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:42.968: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127021399s Nov 25 15:06:42.968: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:44.998: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.156912163s Nov 25 15:06:44.998: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:06:46.973: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.131996642s
Nov 25 15:06:46.973: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
[... the same pair of 'Pod "netserver-0": Phase="Pending"' / 'waiting for it to be Running (with Ready = true)' messages repeats roughly every 2s from 15:06:48.967 through 15:07:44.999, with Elapsed climbing from 16s to 1m12s ...]
Nov 25 15:07:46.977: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false.
Elapsed: 1m14.135858349s Nov 25 15:07:46.977: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:07:48.981: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.139905688s Nov 25 15:07:48.981: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:07:50.997: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.155997391s Nov 25 15:07:50.997: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:07:52.986: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.144504862s Nov 25 15:07:52.986: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:07:55.054: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.212848151s Nov 25 15:07:55.054: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:07:56.953: INFO: Encountered non-retryable error while getting pod esipp-219/netserver-0: Get "https://34.82.189.151/api/v1/namespaces/esipp-219/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:07:56.953: INFO: Unexpected error: <*fmt.wrapError | 0xc00299f240>: { msg: "error while waiting for pod esipp-219/netserver-0 to be running and ready: Get \"https://34.82.189.151/api/v1/namespaces/esipp-219/pods/netserver-0\": dial tcp 34.82.189.151:443: connect: connection refused", err: <*url.Error | 0xc0038265a0>{ Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-219/pods/netserver-0", Err: <*net.OpError | 0xc00149b310>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f30d20>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00299f200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 15:07:56.953: FAIL: error while waiting for pod esipp-219/netserver-0 to be running and ready: Get "https://34.82.189.151/api/v1/namespaces/esipp-219/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011820e0, {0x75c6f7c, 0x9}, 0xc001ced1d0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011820e0, 0x7f548c03ceb0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011820e0, 0x3b?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000da2000, {0x0, 0x0, 0xc0015f2720?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 Nov 25 15:07:56.993: INFO: Unexpected error: <*errors.errorString | 0xc000ec3db0>: { s: "failed to get Service \"external-local-nodes\": Get \"https://34.82.189.151/api/v1/namespaces/esipp-219/services/external-local-nodes\": dial tcp 34.82.189.151:443: connect: connection refused", } Nov 25 15:07:56.993: FAIL: failed to get Service "external-local-nodes": Get "https://34.82.189.151/api/v1/namespaces/esipp-219/services/external-local-nodes": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5.2() test/e2e/network/loadbalancer.go:1366 +0xae panic({0x70eb7e0, 0xc000459880}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0019ac000, 0xcc}, {0xc000d6f700?, 0xc0019ac000?, 0xc000d6f728?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc00299f240}, {0x0?, 0xc0027ddac0?, 0xc003513820?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0011820e0, {0x75c6f7c, 0x9}, 0xc001ced1d0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0011820e0, 0x7f548c03ceb0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0011820e0, 0x3b?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000da2000, {0x0, 0x0, 0xc0015f2720?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:07:56.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 15:07:57.033: INFO: Output of kubectl describe svc: Nov 25 15:07:57.033: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=esipp-219 describe svc --namespace=esipp-219' Nov 25 15:07:57.193: INFO: rc: 1 Nov 25 15:07:57.193: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:07:57.194 STEP: Collecting events from namespace "esipp-219". 
11/25/22 15:07:57.194 Nov 25 15:07:57.234: INFO: Unexpected error: failed to list events in namespace "esipp-219": <*url.Error | 0xc001f30f00>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-219/events", Err: <*net.OpError | 0xc003822190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001ec2f30>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000e30080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:07:57.234: FAIL: failed to list events in namespace "esipp-219": Get "https://34.82.189.151/api/v1/namespaces/esipp-219/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000d6a5c0, {0xc0027ddac0, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002013380}, {0xc0027ddac0, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000d6a650?, {0xc0027ddac0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000da2000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00165a120?, 0xc0034dbf50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00165a120?, 0x7fadfa0?}, {0xae73300?, 0xc0034dbf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-219" for this suite. 11/25/22 15:07:57.234 Nov 25 15:07:57.274: FAIL: Couldn't delete ns: "esipp-219": Delete "https://34.82.189.151/api/v1/namespaces/esipp-219": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-219", Err:(*net.OpError)(0xc001f473b0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000da2000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00165a030?, 0xc003989f08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002c9a030?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00165a030?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
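The long run of "Phase=Pending" messages above is a poll-until-running-and-ready loop that aborts as soon as the GET itself fails (the "non-retryable error" at 15:07:56.953). A rough sketch of such a wait, assuming client-go and wait.PollImmediate; the helper name and timeouts are illustrative, not the framework's implementation in test/e2e/framework:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls the pod every 2s for up to 5m, logging the
// current phase while it is Pending and succeeding once the PodReady
// condition is True.
func waitForPodRunningAndReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Returning the error stops the wait immediately, mirroring the
			// "non-retryable error while getting pod" behaviour in the log.
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("Pod %q: Phase=%q, waiting for it to be Running\n", name, pod.Status.Phase)
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}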
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/network/loadbalancer.go:1272
k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1272 +0xd8
There were additional failures detected after the initial failure:
[FAILED] Nov 25 15:28:57.465: failed to list events in namespace "esipp-725": Get "https://34.82.189.151/api/v1/namespaces/esipp-725/events": dial tcp 34.82.189.151:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 15:28:57.505: Couldn't delete ns: "esipp-725": Delete "https://34.82.189.151/api/v1/namespaces/esipp-725": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-725", Err:(*net.OpError)(0xc004f6b450)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
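The goroutine dump further down in this test's log shows the spec stuck in TestJig.WaitForLoadBalancer, which polls the Service via wait.PollImmediate until status.loadBalancer.ingress is populated; every failed GET is reported as a "Retrying ...." line. A minimal sketch of that kind of wait, assuming a client-go clientset; names and timeouts are illustrative, not the jig's actual code:

package lbwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress keeps re-fetching the Service until the cloud
// provider has assigned an ingress IP or hostname, retrying transient errors.
func waitForLoadBalancerIngress(ctx context.Context, cs kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		s, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Matches the log: each failed GET is reported and then retried.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // no ingress assigned yet, keep waiting
		}
		svc = s
		return true, nil
	})
	return svc, err
}

Because the apiserver at 34.82.189.151:443 refuses every connection, the condition can never succeed, and the log that follows is the retry loop running out its budget.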
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:13:56.082 Nov 25 15:13:56.082: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:13:56.084 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:13:56.399 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:13:56.516 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=LoadBalancer test/e2e/network/loadbalancer.go:1266 STEP: creating a service esipp-725/external-local-lb with type=LoadBalancer 11/25/22 15:13:56.785 STEP: setting ExternalTrafficPolicy=Local 11/25/22 15:13:56.785 STEP: waiting for loadbalancer for service esipp-725/external-local-lb 11/25/22 15:13:57.127 Nov 25 15:13:57.127: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer Nov 25 15:14:01.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:03.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:05.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:07.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:09.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:11.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:13.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:15.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:17.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:19.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:14:21.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
[... the same 'Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused' entry repeats every ~2s from Nov 25 15:14:23.236 through Nov 25 15:18:49.236 ...]
Nov 25 15:18:51.236: INFO: Retrying ....
Nov 25 15:18:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:18:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #9
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m0.703s)
    test/e2e/network/loadbalancer.go:1266
  In [It] (Node Runtime: 5m0s)
    test/e2e/network/loadbalancer.go:1266
    At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 4m59.659s)
      test/e2e/framework/service/jig.go:260

Spec Goroutine
goroutine 2264 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?)
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28)
    test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?)
    test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1271
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 15:18:57.237: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:18:59.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:19:01.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
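[Editor's note] The goroutine 2264 stack in the progress report above shows where the spec is parked: wait.PollImmediate, entered from TestJig.waitForCondition / WaitForLoadBalancer, keeps re-fetching the Service and produces one "Retrying ...." line per failed attempt. As a rough illustration of that polling pattern (a minimal sketch built on client-go and the apimachinery wait package; the function name, interval and timeout are illustrative and this is not the framework's actual jig code), the loop looks roughly like this:

    // Sketch of the kind of poll loop the stack trace above is sitting in:
    // repeatedly GET the Service and succeed once a load-balancer ingress
    // (IP or hostname) is published. Transient errors, such as the
    // "connection refused" seen in this log, are logged and retried.
    package lbwait

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLoadBalancerIngress polls until the Service has at least one
    // LoadBalancer ingress entry, or the timeout expires.
    func waitForLoadBalancerIngress(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Keep retrying on errors; this is what produces the repeated
                // "Retrying .... error trying to get Service" log lines.
                fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
                return false, nil
            }
            return len(svc.Status.LoadBalancer.Ingress) > 0, nil
        })
    }

Because the condition function swallows the GET error and returns false, the poll only ends when the timeout expires, which is consistent with the minutes of identical retries recorded below.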
Nov 25 15:19:03.236 to 15:19:15.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
(same entry every ~2 seconds, 7 attempts)
------------------------------
Progress Report for Ginkgo Process #9 (Spec Runtime: 5m20.708s, Node Runtime: 5m20.005s, Step Runtime: 5m19.663s): same spec, same [By Step] waiting for loadbalancer for service esipp-725/external-local-lb, and the identical goroutine 2264 stack trace as the 5m0.703s report above
------------------------------
Nov 25 15:19:17.236 to 15:19:31.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
(same entry every ~2 seconds, 8 attempts)
------------------------------
Progress Reports for Ginkgo Process #9 continued every ~20 seconds at Spec Runtimes 5m40.709s, 6m0.712s, 6m20.714s, 6m40.717s, 7m0.718s, 7m20.721s, 7m40.724s, 8m0.725s, 8m20.728s, 8m40.73s, 9m0.732s, 9m20.735s, 9m40.737s, 10m0.74s, 10m20.742s and 10m40.744s, each still at [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (test/e2e/framework/service/jig.go:260) and each with the identical goroutine 2264 stack trace shown above
------------------------------
Nov 25 15:24:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:24:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
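[Editor's note] Every attempt in this window fails at the transport layer with dial tcp 34.82.189.151:443: connect: connection refused, i.e. the apiserver endpoint itself is unreachable, so the wait can never observe the Service regardless of whether a load balancer was provisioned. A quick way to confirm that independently of the test (a hedged diagnostic sketch, not part of the e2e framework; the address is simply the one appearing in this log) is a plain TCP dial:

    // Minimal connectivity probe for the apiserver endpoint seen in the log.
    // "connection refused" here means the TCP port is closed or unreachable,
    // i.e. the control-plane endpoint is down, independent of any Service state.
    package probe

    import (
        "fmt"
        "net"
        "time"
    )

    // ProbeAPIServer returns nil if a TCP connection to host:port succeeds.
    func ProbeAPIServer(hostPort string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", hostPort, timeout)
        if err != nil {
            return fmt.Errorf("apiserver endpoint %s unreachable: %w", hostPort, err)
        }
        defer conn.Close()
        return nil
    }

    // Example (hypothetical usage): ProbeAPIServer("34.82.189.151:443", 5*time.Second)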
------------------------------
Progress Report for Ginkgo Process #9 (Spec Runtime: 11m0.746s, Node Runtime: 11m0.044s, Step Runtime: 10m59.702s): same [By Step] waiting for loadbalancer for service esipp-725/external-local-lb and the identical goroutine 2264 stack trace as above
------------------------------
Nov 25 15:24:57.236 to 15:25:15.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
(same entry every ~2 seconds, 10 attempts)
------------------------------
Progress Report for Ginkgo Process #9 (Spec Runtime: 11m20.748s, Node Runtime: 11m20.046s, Step Runtime: 11m19.704s): identical goroutine 2264 stack trace as above
------------------------------
Nov 25 15:25:17.236 to 15:25:35.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
(same entry every ~2 seconds, 10 attempts)
------------------------------
Progress Report for Ginkgo Process #9 (Spec Runtime: 11m40.75s, Node Runtime: 11m40.048s, Step Runtime: 11m39.706s): identical goroutine 2264 stack trace as above
------------------------------
Nov 25 15:25:37.235 to 15:25:49.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused
(same entry every ~2 seconds, 7 attempts)
Nov 25 15:25:51.235: INFO: Retrying ....
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:25:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:25:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m0.753s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m0.051s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 11m59.709s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:25:57.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:25:59.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:01.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:03.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:05.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:07.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:09.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:11.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:13.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:15.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m20.756s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m20.053s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 12m19.711s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:26:17.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:19.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:21.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:23.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:25.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:27.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:29.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:31.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:33.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:35.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m40.758s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m40.055s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 12m39.714s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:26:37.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:39.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:41.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:43.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:45.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:47.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:49.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:51.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 13m0.76s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 13m0.057s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 12m59.715s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:26:57.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:26:59.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:01.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:03.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:05.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:07.237: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:09.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:11.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:13.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:15.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 13m20.762s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 13m20.059s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 13m19.718s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:27:17.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:19.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:21.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:23.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:25.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:27.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:29.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:31.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:33.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:35.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 13m40.765s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 13m40.063s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 13m39.721s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:27:37.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:39.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:41.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:43.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:45.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:47.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:49.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:51.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 14m0.77s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 14m0.067s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 13m59.726s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:27:57.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:27:59.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:01.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:03.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:05.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:07.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:09.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:11.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:13.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:15.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 14m20.772s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 14m20.07s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 14m19.728s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:28:17.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:19.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:21.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:23.236: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:25.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:27.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:29.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:31.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:33.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:35.235: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 14m40.775s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 14m40.072s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 14m39.731s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:28:37.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:39.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:41.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:43.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:45.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:47.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:49.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:51.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:53.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:55.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 15m0.777s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 15m0.075s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-725/external-local-lb (Step Runtime: 14m59.733s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2264 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc004be5dd0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0055d6960?, 0xc002b03b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00561a7d0?, 0x7fa7740?, 0xc00024a400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0055825a0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0055825a0, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0055825a0, 0x6aba880?, 0xc002b03e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0055825a0, 0xc002584ea0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00538c480, 0xc0053253e0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:28:57.236: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:57.276: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.82.189.151/api/v1/namespaces/esipp-725/services/external-local-lb": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:28:57.276: INFO: Unexpected error: <*fmt.wrapError | 0xc0011d9b20>: { msg: "timed out waiting for service \"external-local-lb\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc0002498b0>{ s: "timed out waiting for the condition", }, } Nov 25 15:28:57.276: FAIL: timed out waiting for service "external-local-lb" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1272 +0xd8 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:28:57.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 15:28:57.316: INFO: Output of kubectl describe svc: Nov 25 15:28:57.316: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=esipp-725 describe svc --namespace=esipp-725' Nov 25 15:28:57.425: INFO: rc: 1 Nov 25 15:28:57.425: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:28:57.425 STEP: Collecting events from namespace "esipp-725". 11/25/22 15:28:57.426 Nov 25 15:28:57.465: INFO: Unexpected error: failed to list events in namespace "esipp-725": <*url.Error | 0xc0031241e0>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-725/events", Err: <*net.OpError | 0xc003bf91d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003682660>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0001d1b60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:28:57.465: FAIL: failed to list events in namespace "esipp-725": Get "https://34.82.189.151/api/v1/namespaces/esipp-725/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0018ca5c0, {0xc0054da800, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002584ea0}, {0xc0054da800, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0018ca650?, {0xc0054da800?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001304000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0049d99e0?, 0xc001192f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001192f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0049d99e0?, 0x2622c40?}, {0xae73300?, 0xc001192f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-725" for 
this suite. 11/25/22 15:28:57.466 Nov 25 15:28:57.505: FAIL: Couldn't delete ns: "esipp-725": Delete "https://34.82.189.151/api/v1/namespaces/esipp-725": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-725", Err:(*net.OpError)(0xc004f6b450)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001304000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0049d9960?, 0xc003544fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0049d9960?, 0x0?}, {0xae73300?, 0x5?, 0xc00333b8c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
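The goroutine dump repeated above pins the hang to the framework's load-balancer wait: WaitForLoadBalancer drives wait.PollImmediate, which keeps re-fetching the Service until its load-balancer status is populated, and every GET is answered with "connection refused" because the apiserver at 34.82.189.151:443 is unreachable. Below is a minimal client-go sketch of that polling pattern only; the function name and the 2s/15m values are illustrative assumptions, not the framework's actual code or parameters.

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress is a hypothetical helper mirroring the pattern in the
// stack trace: poll the Service until status.loadBalancer.ingress is populated.
func waitForLoadBalancerIngress(cs kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		s, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient API errors (e.g. "connection refused") are swallowed and retried,
			// which is why the log above is a wall of "Retrying ...." lines.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // load balancer not provisioned yet
		}
		svc = s
		return true, nil
	})
	if err != nil {
		return nil, fmt.Errorf("timed out waiting for service %q to have a load balancer: %w", name, err)
	}
	return svc, nil
}

With the apiserver refusing connections, the condition function never succeeds, so the poll runs out its timeout and surfaces exactly the "timed out waiting for the condition" error recorded above.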
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/pod/exec_util.go:126 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc00351c198, 0x12}, {0xc001f55a40, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00026ce00, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc001df6710, 0xb}, {0xc001e779f0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00351c5d0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001490768?, 0xc001c67da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d854cd99cff0b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00026ce00, {0xc001e779f0, 0xa}, 0x7b3b, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc There were additional failures detected after the initial failure: [FAILED] Nov 25 15:14:01.092: failed to list events in namespace "esipp-4917": Get "https://34.82.189.151/api/v1/namespaces/esipp-4917/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:14:01.132: Couldn't delete ns: "esipp-4917": Delete "https://34.82.189.151/api/v1/namespaces/esipp-4917": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-4917", Err:(*net.OpError)(0xc003e58000)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:12:20.029 Nov 25 15:12:20.029: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:12:20.031 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:12:20.357 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:12:20.443 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-4917/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/25/22 15:12:20.774 STEP: creating a pod to be part of the service external-local-nodeport 11/25/22 15:12:21.035 Nov 25 15:12:21.120: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:12:21.182: INFO: Found all 1 pods Nov 25 15:12:21.182: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-n7cqq] Nov 25 15:12:21.182: INFO: Waiting up to 2m0s for pod "external-local-nodeport-n7cqq" in namespace "esipp-4917" to be "running and ready" Nov 25 15:12:21.266: INFO: Pod "external-local-nodeport-n7cqq": Phase="Pending", Reason="", readiness=false. Elapsed: 84.089283ms Nov 25 15:12:21.266: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-n7cqq' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:12:23.389: INFO: Pod "external-local-nodeport-n7cqq": Phase="Running", Reason="", readiness=true. Elapsed: 2.206640702s Nov 25 15:12:23.389: INFO: Pod "external-local-nodeport-n7cqq" satisfied condition "running and ready" Nov 25 15:12:23.389: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodeport-n7cqq] STEP: Performing setup for networking test in namespace esipp-4917 11/25/22 15:12:24.513 STEP: creating a selector 11/25/22 15:12:24.513 STEP: Creating the service pods in kubernetes 11/25/22 15:12:24.513 Nov 25 15:12:24.513: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 15:12:24.837: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-4917" to be "running and ready" Nov 25 15:12:24.933: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 96.250672ms Nov 25 15:12:24.933: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:12:27.008: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.170761624s Nov 25 15:12:27.008: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:28.990: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.152686618s Nov 25 15:12:28.990: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:31.278: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.440777766s Nov 25 15:12:31.278: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:32.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.158047043s Nov 25 15:12:32.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:35.048: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.210984053s Nov 25 15:12:35.048: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:36.988: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.151262773s Nov 25 15:12:36.988: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:39.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.218730596s Nov 25 15:12:39.056: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:41.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.218434099s Nov 25 15:12:41.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:43.013: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.175748043s Nov 25 15:12:43.013: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:45.009: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.172618403s Nov 25 15:12:45.010: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:47.020: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.183404281s Nov 25 15:12:47.020: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:49.067: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.230102607s Nov 25 15:12:49.067: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:51.057: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.220605478s Nov 25 15:12:51.057: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:53.008: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.170743456s Nov 25 15:12:53.008: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:55.090: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.252906711s Nov 25 15:12:55.090: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:56.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.158299302s Nov 25 15:12:56.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:12:59.105: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.26849484s Nov 25 15:12:59.105: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:01.025: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.187669114s Nov 25 15:13:01.025: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:03.088: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.251221306s Nov 25 15:13:03.088: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:05.041: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.204160562s Nov 25 15:13:05.041: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:07.112: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.274867484s Nov 25 15:13:07.112: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:09.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.240869026s Nov 25 15:13:09.078: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:10.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.158014065s Nov 25 15:13:10.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:13.107: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.269806806s Nov 25 15:13:13.107: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:15.012: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.17560419s Nov 25 15:13:15.012: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:17.048: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.210844869s Nov 25 15:13:17.048: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:18.992: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.155069017s Nov 25 15:13:18.992: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:21.053: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.215982423s Nov 25 15:13:21.053: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:23.011: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.173647063s Nov 25 15:13:23.011: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:25.010: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.173381274s Nov 25 15:13:25.010: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:26.991: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.154124252s Nov 25 15:13:26.991: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:29.022: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.184847196s Nov 25 15:13:29.022: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:31.000: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.163370372s Nov 25 15:13:31.000: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:32.996: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.159242469s Nov 25 15:13:32.996: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:35.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.236993254s Nov 25 15:13:35.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:37.009: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.171678402s Nov 25 15:13:37.009: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:39.107: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.270135931s Nov 25 15:13:39.107: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:41.019: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.18242215s Nov 25 15:13:41.019: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:43.139: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.30196032s Nov 25 15:13:43.139: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:44.997: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.160060676s Nov 25 15:13:44.997: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:13:47.038: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.20138796s Nov 25 15:13:47.038: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 15:13:47.038: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 15:13:47.173: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-4917" to be "running and ready" Nov 25 15:13:47.408: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 234.969933ms Nov 25 15:13:47.408: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 15:13:47.408: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 15:13:47.569: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-4917" to be "running and ready" Nov 25 15:13:47.692: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 123.004249ms Nov 25 15:13:47.692: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 15:13:47.692: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 15:13:47.791 Nov 25 15:13:48.032: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-4917" to be "running" Nov 25 15:13:48.121: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 89.085779ms Nov 25 15:13:50.259: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226830363s Nov 25 15:13:52.209: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177537798s Nov 25 15:13:54.193: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.160703843s Nov 25 15:13:54.193: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 15:13:54.313: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 15:13:54.313 Nov 25 15:13:54.313: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 15:13:54.5 Nov 25 15:13:55.589: INFO: Service node-port-service in namespace esipp-4917 found. Nov 25 15:13:56.107: INFO: Service session-affinity-service in namespace esipp-4917 found. 
STEP: Waiting for NodePort service to expose endpoint 11/25/22 15:13:56.197 Nov 25 15:13:57.198: INFO: Waiting for amount of service:node-port-service endpoints to be 3 STEP: Waiting for Session Affinity service to expose endpoint 11/25/22 15:13:57.334 Nov 25 15:13:58.335: INFO: Waiting for amount of service:session-affinity-service endpoints to be 3 STEP: reading clientIP using the TCP service's NodePort, on node bootstrap-e2e-minion-group-xfgk: 10.138.0.5:31547/clientip 11/25/22 15:13:58.433 Nov 25 15:13:58.504: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.176:9080/dial?request=clientip&protocol=http&host=10.138.0.5&port=31547&tries=1'] Namespace:esipp-4917 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 15:13:58.504: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 15:13:58.506: INFO: ExecWithOptions: Clientset creation Nov 25 15:13:58.506: INFO: ExecWithOptions: execute(POST https://34.82.189.151/api/v1/namespaces/esipp-4917/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.64.3.176%3A9080%2Fdial%3Frequest%3Dclientip%26protocol%3Dhttp%26host%3D10.138.0.5%26port%3D31547%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 15:14:00.831: INFO: Unexpected error: failed to get pod test-container-pod: <*url.Error | 0xc00203af60>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-4917/pods/test-container-pod", Err: <*net.OpError | 0xc003e62960>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f55d40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00131be60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:14:00.831: FAIL: failed to get pod test-container-pod: Get "https://34.82.189.151/api/v1/namespaces/esipp-4917/pods/test-container-pod": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc00351c198, 0x12}, {0xc001f55a40, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00026ce00, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc001df6710, 0xb}, {0xc001e779f0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00351c5d0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x58?, 0x2fd9d05?, 0x40?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001490768?, 0xc001c67da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d854cd99cff0b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00026ce00, {0xc001e779f0, 0xa}, 0x7b3b, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc E1125 15:14:00.831940 10123 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/pod/exec_util.go", LineNumber:126, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc00351c198, 0x12}, {0xc001f55a40, 0x3, 0x3})\n\ttest/e2e/framework/pod/exec_util.go:126 +0x133\nk8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...)\n\ttest/e2e/framework/pod/exec_util.go:138\nk8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00026ce00, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc001df6710, 0xb}, {0xc001e779f0, 0xa}, 0x2378, ...)\n\ttest/e2e/framework/network/utils.go:396 +0x32a\nk8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...)\n\ttest/e2e/framework/network/utils.go:411\nk8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1()\n\ttest/e2e/network/util.go:62 +0x91\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00351c5d0, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x58?, 0x2fd9d05?, 0x40?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001490768?, 0xc001c67da8?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d854cd99cff0b6?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 
+0x50\nk8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00026ce00, {0xc001e779f0, 0xa}, 0x7b3b, 0x4?, {0x75c2d47, 0x8})\n\ttest/e2e/network/util.go:69 +0x125\nk8s.io/kubernetes/test/e2e/network.glob..func20.4()\n\ttest/e2e/network/loadbalancer.go:1336 +0x2dc", CustomMessage:""}} (Your Test Panicked test/e2e/framework/pod/exec_util.go:126 When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...). Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure ) goroutine 9645 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000df5f10}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000df5f10?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000df5f10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0027c6340, 0xc4}, {0xc001d797e8?, 0x75b521a?, 0xc001d79808?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc004684420, 0xaf}, {0xc001d79880?, 0xc00290a510?, 0xc001d798a8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc00203af60}, {0xc00131bea0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc00351c198, 0x12}, {0xc001f55a40, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00026ce00, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc001df6710, 0xb}, {0xc001e779f0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00351c5d0, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001490768?, 0xc001c67da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d854cd99cff0b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00026ce00, {0xc001e779f0, 0xa}, 0x7b3b, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0024df800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d Nov 25 15:14:00.871: INFO: Unexpected error: <*url.Error | 0xc0026f3e60>: { Op: "Delete", URL: "https://34.82.189.151/api/v1/namespaces/esipp-4917/services/external-local-nodeport", Err: <*net.OpError | 0xc00372cd20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00203b470>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00482b500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:14:00.871: FAIL: Delete "https://34.82.189.151/api/v1/namespaces/esipp-4917/services/external-local-nodeport": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc000df5f10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000df5f10?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x70eb7e0, 0xc000df5f10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc004684420, 0xaf}, {0xc001d79880?, 0xc00290a510?, 0xc001d798a8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc00203af60}, {0xc00131bea0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc00351c198, 0x12}, {0xc001f55a40, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00026ce00, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc001df6710, 0xb}, {0xc001e779f0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) 
test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00351c5d0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001490768?, 0xc001c67da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d854cd99cff0b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00026ce00, {0xc001e779f0, 0xa}, 0x7b3b, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:14:00.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 15:14:00.911: INFO: Output of kubectl describe svc: Nov 25 15:14:00.912: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=esipp-4917 describe svc --namespace=esipp-4917' Nov 25 15:14:01.052: INFO: rc: 1 Nov 25 15:14:01.052: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:14:01.052 STEP: Collecting events from namespace "esipp-4917". 
11/25/22 15:14:01.052 Nov 25 15:14:01.091: INFO: Unexpected error: failed to list events in namespace "esipp-4917": <*url.Error | 0xc00357e480>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-4917/events", Err: <*net.OpError | 0xc00372cff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0029765d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00482b860>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:14:01.092: FAIL: failed to list events in namespace "esipp-4917": Get "https://34.82.189.151/api/v1/namespaces/esipp-4917/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001c625c0, {0xc003e0f790, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc005348340}, {0xc003e0f790, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001c62650?, {0xc003e0f790?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009de000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0006782d0?, 0xc00428df50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00428df40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0006782d0?, 0x2622c40?}, {0xae73300?, 0xc00428df80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-4917" for this suite. 11/25/22 15:14:01.092 Nov 25 15:14:01.132: FAIL: Couldn't delete ns: "esipp-4917": Delete "https://34.82.189.151/api/v1/namespaces/esipp-4917": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-4917", Err:(*net.OpError)(0xc003e58000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009de000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000678160?, 0xc004ef5fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000678160?, 0x0?}, {0xae73300?, 0x5?, 0xc0047ae4b0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
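For context on the repeated wait/poll frames in the traces above: GetHTTPContentFromTestContainer wraps its curl-through-exec attempt in wait.PollImmediate, so a condition that reports "not done yet" is normally retried until the timeout. In this run the exec helper hit ExpectNoError on a refused API connection, which calls Fail() (and panics) inside the polled function instead of returning an error, which appears to be why Ginkgo also prints its "Your Test Panicked" note. A minimal, self-contained sketch of the PollImmediate contract follows; the interval, timeout, and condition body are illustrative only:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// PollImmediate runs the condition right away, then every interval until it
	// returns (true, nil), returns a non-nil error, or the timeout expires.
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		attempts++
		// Placeholder for "exec curl against the test container's /dial endpoint";
		// a transient failure is reported as (false, nil) so the poll retries.
		if attempts < 3 {
			return false, nil
		}
		// Success: the expected clientip-style response was received.
		return true, nil
	})
	fmt.Printf("poll finished after %d attempts, err=%v\n", attempts, err)
}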
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/network/loadbalancer.go:1444 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1444 +0x3c6 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:07:56.692: failed to list events in namespace "esipp-7026": Get "https://34.82.189.151/api/v1/namespaces/esipp-7026/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:07:56.732: Couldn't delete ns: "esipp-7026": Delete "https://34.82.189.151/api/v1/namespaces/esipp-7026": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-7026", Err:(*net.OpError)(0xc002940000)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:02:25.86 Nov 25 15:02:25.861: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:02:25.862 Nov 25 15:02:25.902: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:27.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:29.941: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:31.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:33.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:35.941: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:37.941: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:39.941: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:41.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:43.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:45.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:47.941: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:02:49.942: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:04:48.838 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:04:48.93 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work from pods test/e2e/network/loadbalancer.go:1422 STEP: creating a service esipp-7026/external-local-pods with type=LoadBalancer 11/25/22 15:04:49.166 STEP: setting ExternalTrafficPolicy=Local 11/25/22 15:04:49.166 STEP: waiting for loadbalancer for service esipp-7026/external-local-pods 11/25/22 15:04:49.252 Nov 25 15:04:49.252: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-pods 11/25/22 15:05:57.411 Nov 25 15:05:57.523: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:05:57.618: INFO: Found all 1 
pods Nov 25 15:05:57.618: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-pods-frggb] Nov 25 15:05:57.618: INFO: Waiting up to 2m0s for pod "external-local-pods-frggb" in namespace "esipp-7026" to be "running and ready" Nov 25 15:05:57.716: INFO: Pod "external-local-pods-frggb": Phase="Pending", Reason="", readiness=false. Elapsed: 98.020183ms Nov 25 15:05:57.716: INFO: Error evaluating pod condition running and ready: want pod 'external-local-pods-frggb' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:05:59.802: INFO: Pod "external-local-pods-frggb": Phase="Running", Reason="", readiness=true. Elapsed: 2.184182435s Nov 25 15:05:59.802: INFO: Pod "external-local-pods-frggb" satisfied condition "running and ready" Nov 25 15:05:59.802: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-pods-frggb] STEP: waiting for loadbalancer for service esipp-7026/external-local-pods 11/25/22 15:05:59.802 Nov 25 15:05:59.802: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer STEP: Creating pause pod deployment to make sure, pausePods are in desired state 11/25/22 15:05:59.914 Nov 25 15:06:00.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Nov 25 15:06:02.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:04.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:06.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:08.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:10.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:12.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:14.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 
0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:16.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:18.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:20.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:22.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:24.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:26.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:28.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:30.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:32.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:34.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:36.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:38.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:40.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:42.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:44.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:46.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, 
time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:48.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:50.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:52.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:54.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 
25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:56.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:06:58.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:00.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:02.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:04.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:06.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:08.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:10.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:12.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:14.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:16.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:18.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 
15:07:20.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:22.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:24.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:26.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:28.484: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:30.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:32.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:34.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:36.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:38.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:40.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:42.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:44.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:46.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:48.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:50.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:52.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:54.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 25, 15, 6, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-deployment-57cbc6bc65\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 25 15:07:56.427: INFO: Unexpected error: Failed to complete pause pod deployment: <*errors.errorString | 0xc0012f38c0>: { s: "error waiting for deployment \"pause-pod-deployment\" status to match expectation: Get \"https://34.82.189.151/apis/apps/v1/namespaces/esipp-7026/deployments/pause-pod-deployment\": dial tcp 34.82.189.151:443: connect: connection refused", } Nov 25 15:07:56.427: FAIL: Failed to complete pause pod deployment: error waiting for deployment "pause-pod-deployment" status to match expectation: Get "https://34.82.189.151/apis/apps/v1/namespaces/esipp-7026/deployments/pause-pod-deployment": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1444 +0x3c6 Nov 25 15:07:56.466: INFO: Unexpected error: <*errors.errorString | 0xc0010bd630>: { s: "failed to get Service \"external-local-pods\": Get \"https://34.82.189.151/api/v1/namespaces/esipp-7026/services/external-local-pods\": dial tcp 34.82.189.151:443: connect: connection refused", } Nov 25 15:07:56.466: FAIL: failed to get Service "external-local-pods": Get "https://34.82.189.151/api/v1/namespaces/esipp-7026/services/external-local-pods": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.6.1() test/e2e/network/loadbalancer.go:1432 +0xae panic({0x70eb7e0, 0xc003ce3c70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001b62120, 0x112}, {0xc002a3fcb8?, 0xc00050e000?, 0xc002a3fce0?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc0012f38c0}, {0xc0012f38d0?, 0x78959b8?, 0xa?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1444 +0x3c6 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:07:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 15:07:56.506: INFO: Output of kubectl describe svc: Nov 25 15:07:56.506: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=esipp-7026 describe svc --namespace=esipp-7026' Nov 25 15:07:56.652: INFO: rc: 1 Nov 25 15:07:56.652: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:07:56.652 STEP: Collecting events from namespace "esipp-7026". 11/25/22 15:07:56.652 Nov 25 15:07:56.692: INFO: Unexpected error: failed to list events in namespace "esipp-7026": <*url.Error | 0xc003125c50>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/esipp-7026/events", Err: <*net.OpError | 0xc0033a2f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002cee030>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005f0580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:07:56.692: FAIL: failed to list events in namespace "esipp-7026": Get "https://34.82.189.151/api/v1/namespaces/esipp-7026/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000ee45c0, {0xc0032a4800, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000f22000}, {0xc0032a4800, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000ee4650?, {0xc0032a4800?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001296000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000ff75a0?, 0xc000585f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000585f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000ff75a0?, 0x2622c40?}, {0xae73300?, 0xc000585f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-7026" for this suite. 
11/25/22 15:07:56.692 Nov 25 15:07:56.732: FAIL: Couldn't delete ns: "esipp-7026": Delete "https://34.82.189.151/api/v1/namespaces/esipp-7026": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/esipp-7026", Err:(*net.OpError)(0xc002940000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001296000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000ff74f0?, 0xc003d42fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000ff74f0?, 0x0?}, {0xae73300?, 0x5?, 0xc003be9200?}) /usr/local/go/src/reflect/value.go:368 +0xbc
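Every failure in the run above bottoms out in the same symptom: each request to https://34.82.189.151:443 fails with "connect: connection refused", meaning the kube-apiserver endpoint is reachable but nothing is listening on the port while the tests poll it. A quick way to confirm that from the test runner, independent of client-go, is a raw TCP dial. The program below is an illustrative sketch only (the address is copied from the log; it is not part of the e2e framework).

// probe_apiserver.go: minimal TCP reachability probe for the apiserver
// endpoint seen in the failure log. Hypothetical helper, not e2e code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "34.82.189.151:443" // endpoint from the failure log
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connect: connection refused" here matches the e2e failures:
			// the host answers, but no apiserver is listening on :443.
			fmt.Printf("dial %s failed: %v\n", addr, err)
			time.Sleep(2 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("dial %s succeeded: apiserver port is accepting connections\n", addr)
		return
	}
}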
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00127e4b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
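The secondary [PANICKED] failure is the pattern you typically get when an AfterEach dereferences state the test never initialized, because BeforeEach already failed (here, framework setup timed out before anything was created). The snippet below is a generic illustration of that hazard and the usual nil guard, with invented names; it is not the code at loadbalancer.go:73.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// lbService would be set by the test body; it stays nil if BeforeEach fails
// before the service is created (as in the run reported above).
var lbService *v1.Service

// cleanupLoadBalancer is an invented stand-in for an AfterEach body. Without
// the nil check, lbService.Name panics with a nil pointer dereference.
func cleanupLoadBalancer() {
	if lbService == nil {
		return // nothing was created, nothing to clean up
	}
	fmt.Printf("cleaning up load balancer for service %q\n", lbService.Name)
}

func main() {
	cleanupLoadBalancer() // safe even though setup never ran
}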
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:07:56.082 Nov 25 15:07:56.082: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:07:56.084 Nov 25 15:07:56.123: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:07:58.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:00.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:02.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:04.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:06.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:08.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:10.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:12.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:14.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:16.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:18.163: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:20.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:22.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:24.164: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.162: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.202: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.202: INFO: Unexpected error: <*errors.errorString | 0xc0001fd9d0>: { s: "timed out waiting for the condition", } Nov 25 15:08:26.202: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00127e4b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:08:26.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:08:26.242 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
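The BeforeEach failure itself is just the framework giving up after about 30 seconds of retrying namespace creation against an unreachable apiserver; the terminal "timed out waiting for the condition" comes from the polling helper, not from the API call. The sketch below reproduces that retry pattern with client-go under stated assumptions (standalone program, kubeconfig path from the log); it is not the framework's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Retry namespace creation every 2s for 30s; if the apiserver keeps
	// refusing connections, PollImmediate returns "timed out waiting for
	// the condition", the same terminal error seen in the e2e log.
	err = wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "loadbalancers-"}}
		_, cerr := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if cerr != nil {
			fmt.Printf("namespace create failed, retrying: %v\n", cerr)
			return false, nil // treat the error as transient and keep polling
		}
		return true, nil
	})
	if err != nil {
		fmt.Printf("giving up: %v\n", err)
	}
}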
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/service.go:604 k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:604 +0x17b k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 +0xe09 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:18:53.870: failed to list events in namespace "loadbalancers-1724": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1724/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:18:53.911: Couldn't delete ns: "loadbalancers-1724": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-1724": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-1724", Err:(*net.OpError)(0xc004884000)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
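This case fails in testReachableUDP: the repoked NodePort keeps answering with ICMP port-unreachable, which a connected UDP socket in Go surfaces as "read: connection refused", exactly the message that repeats in the log below. The poke pattern is roughly the following; this is an independent sketch under stated assumptions (addresses copied from the log, "echo hello" payload assumed), not the framework's testReachableUDP.

package main

import (
	"fmt"
	"net"
	"time"
)

// pokeUDP sends a payload to addr on a connected UDP socket and waits for a
// reply. On Linux, an ICMP port-unreachable from the node shows up here as
// "read: connection refused"; no reply at all shows up as "i/o timeout".
func pokeUDP(addr string) error {
	conn, err := net.DialTimeout("udp", addr, 3*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("echo hello")); err != nil {
		return err
	}
	if err := conn.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {
		return err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return err
	}
	fmt.Printf("Poke(%q): got %q\n", addr, buf[:n])
	return nil
}

func main() {
	// NodePort endpoint from the log; retry every 2s, as the test does.
	for i := 0; i < 5; i++ {
		if err := pokeUDP("34.82.154.188:30779"); err != nil {
			fmt.Printf("poke failed: %v\n", err)
			time.Sleep(2 * time.Second)
			continue
		}
		return
	}
}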
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:11:59.472 Nov 25 15:11:59.472: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:11:59.474 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:12:00.142 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:12:00.405 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287 Nov 25 15:12:01.000: INFO: namespace for TCP test: loadbalancers-1724 STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-1724 11/25/22 15:12:01.156 Nov 25 15:12:01.327: INFO: service port UDP: 80 STEP: creating a pod to be part of the UDP service mutability-test 11/25/22 15:12:01.327 Nov 25 15:12:01.402: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:12:01.502: INFO: Found 0/1 pods - will retry Nov 25 15:12:03.597: INFO: Found all 1 pods Nov 25 15:12:03.597: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-5bzjg] Nov 25 15:12:03.597: INFO: Waiting up to 2m0s for pod "mutability-test-5bzjg" in namespace "loadbalancers-1724" to be "running and ready" Nov 25 15:12:03.684: INFO: Pod "mutability-test-5bzjg": Phase="Pending", Reason="", readiness=false. Elapsed: 87.565525ms Nov 25 15:12:03.684: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:12:05.850: INFO: Pod "mutability-test-5bzjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253683946s Nov 25 15:12:05.850: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:12:07.792: INFO: Pod "mutability-test-5bzjg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195554199s Nov 25 15:12:07.792: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' to be 'Running' but was 'Pending' Nov 25 15:12:09.763: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. Elapsed: 6.166637229s Nov 25 15:12:09.763: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:11.745: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.148164674s Nov 25 15:12:11.745: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:13.870: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. Elapsed: 10.273477098s Nov 25 15:12:13.870: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:15.805: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. Elapsed: 12.208265758s Nov 25 15:12:15.805: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:17.771: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. Elapsed: 14.173915672s Nov 25 15:12:17.771: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:19.771: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.174157738s Nov 25 15:12:19.771: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-5bzjg' on 'bootstrap-e2e-minion-group-nfrc' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:12:01 +0000 UTC }] Nov 25 15:12:21.762: INFO: Pod "mutability-test-5bzjg": Phase="Running", Reason="", readiness=true. Elapsed: 18.165774095s Nov 25 15:12:21.763: INFO: Pod "mutability-test-5bzjg" satisfied condition "running and ready" Nov 25 15:12:21.763: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-5bzjg] STEP: changing the UDP service to type=NodePort 11/25/22 15:12:21.763 Nov 25 15:12:21.966: INFO: UDP node port: 30778 STEP: hitting the UDP service's NodePort 11/25/22 15:12:21.966 Nov 25 15:12:21.966: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:22.007: INFO: Poke("udp://34.82.154.188:30778"): read udp 10.60.68.165:52289->34.82.154.188:30778: read: connection refused Nov 25 15:12:24.007: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:24.046: INFO: Poke("udp://34.82.154.188:30778"): read udp 10.60.68.165:59916->34.82.154.188:30778: read: connection refused Nov 25 15:12:26.007: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:26.046: INFO: Poke("udp://34.82.154.188:30778"): read udp 10.60.68.165:52158->34.82.154.188:30778: read: connection refused Nov 25 15:12:28.007: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:28.046: INFO: Poke("udp://34.82.154.188:30778"): read udp 10.60.68.165:34229->34.82.154.188:30778: read: connection refused Nov 25 15:12:30.008: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:30.047: INFO: Poke("udp://34.82.154.188:30778"): read udp 10.60.68.165:55515->34.82.154.188:30778: read: connection refused Nov 25 15:12:32.007: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:12:32.048: INFO: Poke("udp://34.82.154.188:30778"): success STEP: creating a static load balancer IP 11/25/22 15:12:32.048 Nov 25 15:12:34.119: INFO: Allocated static load balancer IP: 34.127.24.56 STEP: changing the UDP service to type=LoadBalancer 11/25/22 15:12:34.119 STEP: demoting the static IP to ephemeral 11/25/22 15:12:34.355 STEP: waiting for the UDP service to have a load balancer 11/25/22 15:12:35.911 Nov 25 15:12:35.911: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 25 15:13:48.043: INFO: UDP load balancer: 34.127.24.56 STEP: hitting the UDP service's NodePort 11/25/22 15:13:48.043 Nov 25 15:13:48.043: INFO: Poking udp://34.82.154.188:30778 Nov 25 15:13:48.084: INFO: Poke("udp://34.82.154.188:30778"): success STEP: hitting the UDP service's LoadBalancer 11/25/22 15:13:48.084 Nov 25 15:13:48.084: INFO: Poking udp://34.127.24.56:80 Nov 25 15:13:51.085: INFO: Poke("udp://34.127.24.56:80"): read udp 10.60.68.165:35803->34.127.24.56:80: i/o timeout Nov 25 15:13:53.085: INFO: Poking udp://34.127.24.56:80 Nov 25 15:13:53.126: INFO: Poke("udp://34.127.24.56:80"): success STEP: changing the UDP service's NodePort 11/25/22 15:13:53.126 Nov 25 15:13:53.560: INFO: UDP node port: 30779 STEP: hitting the UDP service's new NodePort 11/25/22 15:13:53.56 Nov 25 15:13:53.560: 
INFO: Poking udp://34.82.154.188:30779 Nov 25 15:13:53.600: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39210->34.82.154.188:30779: read: connection refused Nov 25 15:13:55.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:13:55.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47794->34.82.154.188:30779: read: connection refused Nov 25 15:13:57.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:13:57.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34272->34.82.154.188:30779: read: connection refused Nov 25 15:13:59.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:13:59.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34921->34.82.154.188:30779: read: connection refused Nov 25 15:14:01.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:01.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:32952->34.82.154.188:30779: read: connection refused Nov 25 15:14:03.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:03.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34639->34.82.154.188:30779: read: connection refused Nov 25 15:14:05.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:05.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55567->34.82.154.188:30779: read: connection refused Nov 25 15:14:07.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:07.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52004->34.82.154.188:30779: read: connection refused Nov 25 15:14:09.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:09.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:54393->34.82.154.188:30779: read: connection refused Nov 25 15:14:11.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:11.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:48585->34.82.154.188:30779: read: connection refused Nov 25 15:14:13.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:13.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:44398->34.82.154.188:30779: read: connection refused Nov 25 15:14:15.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:15.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59702->34.82.154.188:30779: read: connection refused Nov 25 15:14:17.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:17.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59414->34.82.154.188:30779: read: connection refused Nov 25 15:14:19.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:19.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35295->34.82.154.188:30779: read: connection refused Nov 25 15:14:21.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:21.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:46334->34.82.154.188:30779: read: connection refused Nov 25 15:14:23.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:23.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50306->34.82.154.188:30779: read: connection refused Nov 25 15:14:25.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:25.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:46904->34.82.154.188:30779: read: connection refused Nov 25 15:14:27.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:27.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:60657->34.82.154.188:30779: read: connection refused Nov 25 15:14:29.601: INFO: 
Poking udp://34.82.154.188:30779 Nov 25 15:14:29.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:51573->34.82.154.188:30779: read: connection refused Nov 25 15:14:31.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:31.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:58321->34.82.154.188:30779: read: connection refused Nov 25 15:14:33.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:33.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:36002->34.82.154.188:30779: read: connection refused Nov 25 15:14:35.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:35.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59826->34.82.154.188:30779: read: connection refused Nov 25 15:14:37.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:37.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35741->34.82.154.188:30779: read: connection refused Nov 25 15:14:39.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:39.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35158->34.82.154.188:30779: read: connection refused Nov 25 15:14:41.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:41.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50961->34.82.154.188:30779: read: connection refused Nov 25 15:14:43.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:43.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35866->34.82.154.188:30779: read: connection refused Nov 25 15:14:45.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:45.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:45494->34.82.154.188:30779: read: connection refused Nov 25 15:14:47.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:47.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38917->34.82.154.188:30779: read: connection refused Nov 25 15:14:49.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:49.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56034->34.82.154.188:30779: read: connection refused Nov 25 15:14:51.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:51.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50154->34.82.154.188:30779: read: connection refused Nov 25 15:14:53.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:53.641: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39570->34.82.154.188:30779: read: connection refused Nov 25 15:14:55.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:55.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42808->34.82.154.188:30779: read: connection refused Nov 25 15:14:57.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:57.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:32999->34.82.154.188:30779: read: connection refused Nov 25 15:14:59.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:14:59.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35664->34.82.154.188:30779: read: connection refused Nov 25 15:15:01.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:01.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37077->34.82.154.188:30779: read: connection refused Nov 25 15:15:03.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:03.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:51093->34.82.154.188:30779: read: connection refused Nov 25 15:15:05.600: INFO: Poking 
udp://34.82.154.188:30779 Nov 25 15:15:05.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:54236->34.82.154.188:30779: read: connection refused Nov 25 15:15:07.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:07.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:57978->34.82.154.188:30779: read: connection refused Nov 25 15:15:09.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:09.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:48372->34.82.154.188:30779: read: connection refused Nov 25 15:15:11.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:11.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40690->34.82.154.188:30779: read: connection refused Nov 25 15:15:13.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:13.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35283->34.82.154.188:30779: read: connection refused Nov 25 15:15:15.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:15.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34752->34.82.154.188:30779: read: connection refused Nov 25 15:15:17.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:17.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41719->34.82.154.188:30779: read: connection refused Nov 25 15:15:19.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:19.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:33547->34.82.154.188:30779: read: connection refused Nov 25 15:15:21.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:21.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:49869->34.82.154.188:30779: read: connection refused Nov 25 15:15:23.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:23.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56511->34.82.154.188:30779: read: connection refused Nov 25 15:15:25.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:25.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:58943->34.82.154.188:30779: read: connection refused Nov 25 15:15:27.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:27.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:60787->34.82.154.188:30779: read: connection refused Nov 25 15:15:29.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:29.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:46860->34.82.154.188:30779: read: connection refused Nov 25 15:15:31.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:31.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:45128->34.82.154.188:30779: read: connection refused Nov 25 15:15:33.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:33.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42610->34.82.154.188:30779: read: connection refused Nov 25 15:15:35.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:35.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:36443->34.82.154.188:30779: read: connection refused Nov 25 15:15:37.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:37.663: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:36303->34.82.154.188:30779: read: connection refused Nov 25 15:15:39.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:39.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37482->34.82.154.188:30779: read: connection refused Nov 25 15:15:41.601: INFO: Poking 
udp://34.82.154.188:30779 Nov 25 15:15:41.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59745->34.82.154.188:30779: read: connection refused Nov 25 15:15:43.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:43.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:60673->34.82.154.188:30779: read: connection refused Nov 25 15:15:45.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:45.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:54949->34.82.154.188:30779: read: connection refused Nov 25 15:15:47.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:47.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41243->34.82.154.188:30779: read: connection refused Nov 25 15:15:49.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:49.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:53075->34.82.154.188:30779: read: connection refused Nov 25 15:15:51.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:51.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37375->34.82.154.188:30779: read: connection refused Nov 25 15:15:53.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:53.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56860->34.82.154.188:30779: read: connection refused Nov 25 15:15:55.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:55.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41864->34.82.154.188:30779: read: connection refused Nov 25 15:15:57.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:57.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50474->34.82.154.188:30779: read: connection refused Nov 25 15:15:59.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:15:59.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56905->34.82.154.188:30779: read: connection refused Nov 25 15:16:01.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:01.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38235->34.82.154.188:30779: read: connection refused Nov 25 15:16:03.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:03.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42455->34.82.154.188:30779: read: connection refused Nov 25 15:16:05.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:05.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50560->34.82.154.188:30779: read: connection refused Nov 25 15:16:07.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:07.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35084->34.82.154.188:30779: read: connection refused Nov 25 15:16:09.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:09.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:43914->34.82.154.188:30779: read: connection refused Nov 25 15:16:11.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:11.642: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59297->34.82.154.188:30779: read: connection refused Nov 25 15:16:13.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:13.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39205->34.82.154.188:30779: read: connection refused Nov 25 15:16:15.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:15.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:33324->34.82.154.188:30779: read: connection refused Nov 25 15:16:17.600: INFO: Poking 
udp://34.82.154.188:30779 Nov 25 15:16:17.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40204->34.82.154.188:30779: read: connection refused Nov 25 15:16:19.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:19.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:44182->34.82.154.188:30779: read: connection refused Nov 25 15:16:21.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:21.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52494->34.82.154.188:30779: read: connection refused Nov 25 15:16:23.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:23.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35487->34.82.154.188:30779: read: connection refused Nov 25 15:16:25.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:25.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:58137->34.82.154.188:30779: read: connection refused Nov 25 15:16:27.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:27.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40720->34.82.154.188:30779: read: connection refused Nov 25 15:16:29.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:29.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38019->34.82.154.188:30779: read: connection refused Nov 25 15:16:31.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:31.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:46683->34.82.154.188:30779: read: connection refused Nov 25 15:16:33.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:33.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:32870->34.82.154.188:30779: read: connection refused Nov 25 15:16:35.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:35.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:53533->34.82.154.188:30779: read: connection refused Nov 25 15:16:37.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:37.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39691->34.82.154.188:30779: read: connection refused Nov 25 15:16:39.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:39.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55809->34.82.154.188:30779: read: connection refused Nov 25 15:16:41.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:41.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40096->34.82.154.188:30779: read: connection refused Nov 25 15:16:43.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:43.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47320->34.82.154.188:30779: read: connection refused Nov 25 15:16:45.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:45.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:48364->34.82.154.188:30779: read: connection refused Nov 25 15:16:47.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:47.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40484->34.82.154.188:30779: read: connection refused Nov 25 15:16:49.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:49.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:43481->34.82.154.188:30779: read: connection refused Nov 25 15:16:51.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:51.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35103->34.82.154.188:30779: read: connection refused Nov 25 15:16:53.601: INFO: Poking 
udp://34.82.154.188:30779 Nov 25 15:16:53.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47034->34.82.154.188:30779: read: connection refused Nov 25 15:16:55.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:55.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52717->34.82.154.188:30779: read: connection refused Nov 25 15:16:57.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:57.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37615->34.82.154.188:30779: read: connection refused Nov 25 15:16:59.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:16:59.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37545->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m1.391s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 3m7.303s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:17:01.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:01.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:51101->34.82.154.188:30779: read: connection refused Nov 25 15:17:03.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:03.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38304->34.82.154.188:30779: read: connection refused Nov 25 15:17:05.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:05.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40874->34.82.154.188:30779: read: connection refused Nov 25 15:17:07.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:07.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47175->34.82.154.188:30779: read: connection refused Nov 25 15:17:09.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:09.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37974->34.82.154.188:30779: read: connection refused Nov 25 15:17:11.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:11.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:48516->34.82.154.188:30779: read: connection refused Nov 25 15:17:13.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:13.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40721->34.82.154.188:30779: read: connection refused Nov 25 15:17:15.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:15.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55504->34.82.154.188:30779: read: connection refused Nov 25 15:17:17.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:17.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:33857->34.82.154.188:30779: read: connection refused Nov 25 15:17:19.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:19.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42426->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m21.393s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 3m27.305s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:17:21.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:21.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:50793->34.82.154.188:30779: read: connection refused Nov 25 15:17:23.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:23.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52818->34.82.154.188:30779: read: connection refused Nov 25 15:17:25.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:25.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55971->34.82.154.188:30779: read: connection refused Nov 25 15:17:27.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:27.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:33647->34.82.154.188:30779: read: connection refused Nov 25 15:17:29.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:29.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42447->34.82.154.188:30779: read: connection refused Nov 25 15:17:31.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:31.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52245->34.82.154.188:30779: read: connection refused Nov 25 15:17:33.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:33.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41527->34.82.154.188:30779: read: connection refused Nov 25 15:17:35.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:35.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55448->34.82.154.188:30779: read: connection refused Nov 25 15:17:37.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:37.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39677->34.82.154.188:30779: read: connection refused Nov 25 15:17:39.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:39.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47612->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m41.395s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 3m47.307s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 
[select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:17:41.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:41.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41922->34.82.154.188:30779: read: connection refused Nov 25 15:17:43.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:43.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52719->34.82.154.188:30779: read: connection refused Nov 25 15:17:45.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:45.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34356->34.82.154.188:30779: read: connection refused Nov 25 15:17:47.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:47.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:57368->34.82.154.188:30779: read: connection refused Nov 25 15:17:49.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:49.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:36587->34.82.154.188:30779: read: connection refused Nov 25 15:17:51.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:51.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34408->34.82.154.188:30779: read: connection refused Nov 25 15:17:53.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:53.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:44219->34.82.154.188:30779: read: connection refused Nov 25 15:17:55.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:55.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:59269->34.82.154.188:30779: read: connection refused Nov 25 15:17:57.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:57.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:60715->34.82.154.188:30779: read: connection refused Nov 25 15:17:59.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:17:59.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:44739->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and 
ports of a UDP service [Slow] (Spec Runtime: 6m1.397s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 4m7.309s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:01.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:01.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42163->34.82.154.188:30779: read: connection refused Nov 25 15:18:03.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:03.639: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:39471->34.82.154.188:30779: read: connection refused Nov 25 15:18:05.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:05.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52066->34.82.154.188:30779: read: connection refused Nov 25 15:18:07.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:07.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:37751->34.82.154.188:30779: read: connection refused Nov 25 15:18:09.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:09.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:49487->34.82.154.188:30779: read: connection refused Nov 25 15:18:11.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:11.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34399->34.82.154.188:30779: read: connection refused Nov 25 15:18:13.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:13.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42340->34.82.154.188:30779: read: connection refused Nov 25 15:18:15.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:15.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38465->34.82.154.188:30779: read: connection refused Nov 25 15:18:17.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:17.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40294->34.82.154.188:30779: read: connection refused Nov 25 15:18:19.601: INFO: Poking 
udp://34.82.154.188:30779 Nov 25 15:18:19.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41388->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m21.399s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m20.008s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 4m27.311s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:21.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:21.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55901->34.82.154.188:30779: read: connection refused Nov 25 15:18:23.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:23.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56740->34.82.154.188:30779: read: connection refused Nov 25 15:18:25.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:25.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:52000->34.82.154.188:30779: read: connection refused Nov 25 15:18:27.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:27.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38625->34.82.154.188:30779: read: connection refused Nov 25 15:18:29.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:29.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:40713->34.82.154.188:30779: read: connection refused Nov 25 15:18:31.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:31.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:55794->34.82.154.188:30779: read: connection refused Nov 25 15:18:33.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:33.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:35922->34.82.154.188:30779: read: connection refused Nov 25 15:18:35.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:35.640: INFO: 
Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:47720->34.82.154.188:30779: read: connection refused Nov 25 15:18:37.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:37.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:57230->34.82.154.188:30779: read: connection refused Nov 25 15:18:39.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:39.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:60529->34.82.154.188:30779: read: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m41.401s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m40.01s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's new NodePort (Step Runtime: 4m47.313s) test/e2e/network/loadbalancer.go:410 Spec Goroutine goroutine 10248 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000a81a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00017e680?, 0xc0035dfcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc000fa6570?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001b66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:41.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:41.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:42686->34.82.154.188:30779: read: connection refused Nov 25 15:18:43.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:43.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:33831->34.82.154.188:30779: read: connection refused Nov 25 15:18:45.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:45.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:56503->34.82.154.188:30779: read: connection refused Nov 25 15:18:47.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:47.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38211->34.82.154.188:30779: read: connection refused Nov 25 15:18:49.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:49.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41668->34.82.154.188:30779: read: connection refused Nov 25 15:18:51.600: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:51.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:34883->34.82.154.188:30779: read: connection refused Nov 25 15:18:53.601: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:53.640: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:41549->34.82.154.188:30779: read: connection refused Nov 25 15:18:53.640: INFO: Poking udp://34.82.154.188:30779 Nov 25 15:18:53.679: INFO: Poke("udp://34.82.154.188:30779"): read udp 10.60.68.165:38497->34.82.154.188:30779: read: connection refused Nov 25 15:18:53.679: FAIL: Could not reach UDP service through 34.82.154.188:30779 after 5m0s: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0034e5210, 0xd}, 0x783b, 0x0?) test/e2e/network/service.go:604 +0x17b k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:411 +0xe09 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:18:53.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 15:18:53.719: INFO: Output of kubectl describe svc: Nov 25 15:18:53.719: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-1724 describe svc --namespace=loadbalancers-1724' Nov 25 15:18:53.831: INFO: rc: 1 Nov 25 15:18:53.831: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:18:53.831 STEP: Collecting events from namespace "loadbalancers-1724". 
11/25/22 15:18:53.831 Nov 25 15:18:53.870: INFO: Unexpected error: failed to list events in namespace "loadbalancers-1724": <*url.Error | 0xc004cf7050>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-1724/events", Err: <*net.OpError | 0xc00551bd10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004d3d020>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0012637e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:18:53.870: FAIL: failed to list events in namespace "loadbalancers-1724": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1724/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0009d65c0, {0xc0025921b0, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc004f26680}, {0xc0025921b0, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0009d6650?, {0xc0025921b0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000d6c4b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0016ae880?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0016ae880?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-1724" for this suite. 11/25/22 15:18:53.871 Nov 25 15:18:53.910: FAIL: Couldn't delete ns: "loadbalancers-1724": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-1724": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-1724", Err:(*net.OpError)(0xc004884000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d6c4b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0016ae7e0?, 0x66e0100?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000017830?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0016ae7e0?, 0xc001701f68?}, {0xae73300?, 0x801de88?, 0xc001a4a820?}) /usr/local/go/src/reflect/value.go:368 +0xbc
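The hundreds of "Poking udp://34.82.154.188:30779" lines above come from the framework's UDP reachability poll (the goroutine dumps show testReachableUDP driving wait.PollImmediate, roughly every 2 seconds for up to 5 minutes). As an illustration only, not the framework's actual implementation, a single poke plus the surrounding poll amounts to something like the sketch below; the payload and timeouts are assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

// pokeUDP sends one datagram to hostPort and waits briefly for a reply.
// A "read: connection refused" error, as seen throughout the log, means the
// node answered with ICMP port-unreachable, i.e. nothing is serving the
// NodePort yet.
func pokeUDP(hostPort string, timeout time.Duration) error {
	conn, err := net.DialTimeout("udp", hostPort, timeout)
	if err != nil {
		return fmt.Errorf("dial: %w", err)
	}
	defer conn.Close()

	if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	// Payload is an assumption; the test's netexec backend echoes simple requests.
	if _, err := conn.Write([]byte("echo hello")); err != nil {
		return fmt.Errorf("write: %w", err)
	}
	buf := make([]byte, 1024)
	if _, err := conn.Read(buf); err != nil {
		return fmt.Errorf("read: %w", err) // e.g. "read: connection refused"
	}
	return nil
}

func main() {
	// Poll every 2s for up to 5m, mirroring the cadence visible in the log.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if err := pokeUDP("34.82.154.188:30779", 3*time.Second); err != nil {
			fmt.Println("Poke failed:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("Poke succeeded")
		return
	}
	fmt.Println("timed out waiting for the NodePort to respond")
}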
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\sLoadBalancer\sService\swithout\sNodePort\sand\schange\sit\s\[Slow\]$'
test/e2e/network/loadbalancer.go:959 k8s.io/kubernetes/test/e2e/network.glob..func19.13() test/e2e/network/loadbalancer.go:959 +0xdab There were additional failures detected after the initial failure: [FAILED] Nov 25 14:59:57.546: failed to list events in namespace "loadbalancers-7916": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 14:59:57.586: Couldn't delete ns: "loadbalancers-7916": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-7916", Err:(*net.OpError)(0xc00366d7c0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 14:58:49.047 Nov 25 14:58:49.047: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 14:58:49.049 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 14:58:49.177 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 14:58:49.289 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create LoadBalancer Service without NodePort and change it [Slow] test/e2e/network/loadbalancer.go:850 Nov 25 14:58:49.497: INFO: namespace for TCP test: loadbalancers-7916 STEP: creating a TCP service reallocate-nodeport-test with type=ClusterIP in namespace loadbalancers-7916 11/25/22 14:58:49.539 Nov 25 14:58:49.593: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service reallocate-nodeport-test 11/25/22 14:58:49.593 Nov 25 14:58:49.640: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 14:58:49.688: INFO: Found all 1 pods Nov 25 14:58:49.688: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [reallocate-nodeport-test-mkwml] Nov 25 14:58:49.688: INFO: Waiting up to 2m0s for pod "reallocate-nodeport-test-mkwml" in namespace "loadbalancers-7916" to be "running and ready" Nov 25 14:58:49.730: INFO: Pod "reallocate-nodeport-test-mkwml": Phase="Pending", Reason="", readiness=false. Elapsed: 41.701029ms Nov 25 14:58:49.730: INFO: Error evaluating pod condition running and ready: want pod 'reallocate-nodeport-test-mkwml' on 'bootstrap-e2e-minion-group-cs2j' to be 'Running' but was 'Pending' Nov 25 14:58:51.773: INFO: Pod "reallocate-nodeport-test-mkwml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084644066s Nov 25 14:58:51.773: INFO: Error evaluating pod condition running and ready: want pod 'reallocate-nodeport-test-mkwml' on 'bootstrap-e2e-minion-group-cs2j' to be 'Running' but was 'Pending' Nov 25 14:58:53.777: INFO: Pod "reallocate-nodeport-test-mkwml": Phase="Running", Reason="", readiness=false. Elapsed: 4.088782708s Nov 25 14:58:53.777: INFO: Error evaluating pod condition running and ready: pod 'reallocate-nodeport-test-mkwml' on 'bootstrap-e2e-minion-group-cs2j' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:58:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:58:51 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:58:51 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:58:49 +0000 UTC }] Nov 25 14:58:55.811: INFO: Pod "reallocate-nodeport-test-mkwml": Phase="Running", Reason="", readiness=true. Elapsed: 6.122767017s Nov 25 14:58:55.811: INFO: Pod "reallocate-nodeport-test-mkwml" satisfied condition "running and ready" Nov 25 14:58:55.811: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [reallocate-nodeport-test-mkwml] STEP: creating a static load balancer IP 11/25/22 14:58:55.811 Nov 25 14:58:58.064: INFO: Allocated static load balancer IP: 34.145.90.42 STEP: changing the TCP service to type=LoadBalancer 11/25/22 14:58:58.064 STEP: waiting for the TCP service to have a load balancer 11/25/22 14:58:58.204 Nov 25 14:58:58.204: INFO: Waiting up to 15m0s for service "reallocate-nodeport-test" to have a LoadBalancer Nov 25 14:59:48.370: INFO: TCP load balancer: 34.145.90.42 STEP: demoting the static IP to ephemeral 11/25/22 14:59:48.37 STEP: hitting the TCP service's LoadBalancer 11/25/22 14:59:50.007 Nov 25 14:59:50.007: INFO: Poking "http://34.145.90.42:80/echo?msg=hello" Nov 25 14:59:57.241: INFO: Poke("http://34.145.90.42:80/echo?msg=hello"): success STEP: adding a TCP service's NodePort 11/25/22 14:59:57.241 Nov 25 14:59:57.280: INFO: Unexpected error: <*errors.errorString | 0xc003786810>: { s: "failed to get Service \"reallocate-nodeport-test\": Get \"https://34.82.189.151/api/v1/namespaces/loadbalancers-7916/services/reallocate-nodeport-test\": dial tcp 34.82.189.151:443: connect: connection refused", } Nov 25 14:59:57.280: FAIL: failed to get Service "reallocate-nodeport-test": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916/services/reallocate-nodeport-test": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.13() test/e2e/network/loadbalancer.go:959 +0xdab [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 14:59:57.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 14:59:57.321: INFO: Output of kubectl describe svc: Nov 25 14:59:57.321: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-7916 describe svc --namespace=loadbalancers-7916' Nov 25 14:59:57.505: INFO: rc: 1 Nov 25 14:59:57.506: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 14:59:57.506 STEP: Collecting events from namespace "loadbalancers-7916". 
11/25/22 14:59:57.506 Nov 25 14:59:57.545: INFO: Unexpected error: failed to list events in namespace "loadbalancers-7916": <*url.Error | 0xc00367bc50>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916/events", Err: <*net.OpError | 0xc003629540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003712960>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0036769a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 14:59:57.546: FAIL: failed to list events in namespace "loadbalancers-7916": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003d3e5c0, {0xc002a14840, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000521ba0}, {0xc002a14840, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003d3e650?, {0xc002a14840?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000e684b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001d4dc70?, 0xc004657f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004657f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001d4dc70?, 0x2622c40?}, {0xae73300?, 0xc004657f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-7916" for this suite. 11/25/22 14:59:57.546 Nov 25 14:59:57.586: FAIL: Couldn't delete ns: "loadbalancers-7916": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-7916": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-7916", Err:(*net.OpError)(0xc00366d7c0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e684b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001d4dbf0?, 0x66e0100?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x6574222c22737961?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001d4dbf0?, 0xc003d62f68?}, {0xae73300?, 0x801de88?, 0xc000521ba0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
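The primary failure in this spec happens while "adding a TCP service's NodePort": the test needs to read and update the Service object, but every request to the apiserver at 34.82.189.151:443 is refused. A rough sketch of that read-modify-write, assuming a client-go clientset; the helper name and the explicit port value are illustrative, not the test's actual code, while the namespace and Service name are taken from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setNodePort fetches the Service and writes back an updated spec. The Get is
// the call that fails above with "connect: connection refused".
func setNodePort(ctx context.Context, cs kubernetes.Interface, ns, name string, port int32) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("failed to get Service %q: %w", name, err)
	}
	// Illustrative: the real test may instead let the apiserver allocate a port.
	svc.Spec.Ports[0].NodePort = port
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 30080 is a hypothetical NodePort chosen only for this sketch.
	if err := setNodePort(context.Background(), cs, "loadbalancers-7916", "reallocate-nodeport-test", 30080); err != nil {
		fmt.Println(err)
	}
}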
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/network/loadbalancer.go:655 k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:655 +0x832 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:25:09.908: failed to list events in namespace "loadbalancers-6423": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-6423/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:25:09.948: Couldn't delete ns: "loadbalancers-6423": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-6423": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-6423", Err:(*net.OpError)(0xc004ee01e0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
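In the log below, the test probes the internal load balancer (10.138.0.6:80) by exec'ing curl inside a host-network pod, retrying roughly every 20 seconds; once the apiserver stops answering, the kubectl exec itself fails. A minimal sketch of that probe loop, assuming kubectl is on PATH and using the kubeconfig, namespace, and pod name that appear in the log; it is an illustration of the pattern, not the framework's helper.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// curlViaHostExecPod shells out to kubectl exec, the same way the test drives
// curl from the ilb-host-exec pod in the log below.
func curlViaHostExecPod(url string) error {
	cmd := exec.Command("kubectl",
		"--server=https://34.82.189.151",
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=loadbalancers-6423",
		"exec", "ilb-host-exec", "--",
		"/bin/sh", "-c", fmt.Sprintf("curl -m 5 '%s'", url))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("curl failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		if err := curlViaHostExecPod("http://10.138.0.6:80/echo?msg=hello"); err != nil {
			fmt.Println(err)
			time.Sleep(20 * time.Second) // matches the ~20s cadence in the log
			continue
		}
		fmt.Println("internal load balancer responded")
		return
	}
	fmt.Println("timed out waiting for the internal load balancer")
}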
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:12:27.402 Nov 25 15:12:27.402: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:12:27.404 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:12:27.694 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:12:27.792 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/25/22 15:12:28.091 Nov 25 15:12:28.147: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:12:28.216: INFO: Found all 1 pods Nov 25 15:12:28.216: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-5mvbj] Nov 25 15:12:28.216: INFO: Waiting up to 2m0s for pod "lb-internal-5mvbj" in namespace "loadbalancers-6423" to be "running and ready" Nov 25 15:12:28.369: INFO: Pod "lb-internal-5mvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 152.803273ms Nov 25 15:12:28.369: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-5mvbj' on 'bootstrap-e2e-minion-group-xfgk' to be 'Running' but was 'Pending' Nov 25 15:12:30.453: INFO: Pod "lb-internal-5mvbj": Phase="Running", Reason="", readiness=true. Elapsed: 2.237235736s Nov 25 15:12:30.453: INFO: Pod "lb-internal-5mvbj" satisfied condition "running and ready" Nov 25 15:12:30.453: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-internal-5mvbj] STEP: creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled 11/25/22 15:12:30.453 Nov 25 15:12:30.838: INFO: Waiting up to 15m0s for service "lb-internal" to have a LoadBalancer STEP: hitting the internal load balancer from pod 11/25/22 15:13:19.35 Nov 25 15:13:19.350: INFO: creating pod with host network Nov 25 15:13:19.350: INFO: Creating new host exec pod Nov 25 15:13:19.490: INFO: Waiting up to 5m0s for pod "ilb-host-exec" in namespace "loadbalancers-6423" to be "running and ready" Nov 25 15:13:19.583: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 93.288069ms Nov 25 15:13:19.583: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:13:21.643: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153116238s Nov 25 15:13:21.643: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:13:23.707: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216836728s Nov 25 15:13:23.707: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:13:25.717: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227069967s Nov 25 15:13:25.717: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:13:27.709: INFO: Pod "ilb-host-exec": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.219076951s Nov 25 15:13:27.709: INFO: The phase of Pod ilb-host-exec is Running (Ready = true) Nov 25 15:13:27.709: INFO: Pod "ilb-host-exec" satisfied condition "running and ready" Nov 25 15:13:27.709: INFO: Waiting up to 15m0s for service "lb-internal"'s internal LB to respond to requests Nov 25 15:13:27.709: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:13:28.890: INFO: rc: 7 Nov 25 15:13:28.890: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 4 ms: Connection refused command terminated with exit code 7 error: exit status 7 Nov 25 15:13:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:13:49.502: INFO: rc: 1 Nov 25 15:13:49.502: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 15:14:08.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:14:09.004: INFO: rc: 1 Nov 25 15:14:09.004: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:14:29.005: INFO: rc: 1 Nov 25 15:14:29.005: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:14:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:14:49.003: INFO: rc: 1 Nov 25 15:14:49.003: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:08.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:15:09.003: INFO: rc: 1 Nov 25 15:15:09.004: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:28.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:15:29.003: INFO: rc: 1 Nov 25 15:15:29.003: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:15:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:15:48.998: INFO: rc: 1 Nov 25 15:15:48.998: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:16:08.997: INFO: rc: 1 Nov 25 15:16:08.997: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:16:28.995: INFO: rc: 1 Nov 25 15:16:28.996: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:16:48.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:16:49.002: INFO: rc: 1 Nov 25 15:16:49.002: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 Nov 25 15:17:08.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:17:09.005: INFO: rc: 1 Nov 25 15:17:09.006: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m0.613s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m8.665s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:17:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:17:29.004: INFO: rc: 1 Nov 25 15:17:29.004: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m20.615s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m28.667s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:17:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:17:49.000: INFO: rc: 1 Nov 25 15:17:49.000: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m40.617s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 4m48.669s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:18:09.001: INFO: rc: 1 Nov 25 15:18:09.001: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m0.619s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m8.671s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:18:29.000: INFO: rc: 1 Nov 25 15:18:29.000: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m20.621s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m20.008s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m28.673s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:18:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:18:48.998: INFO: rc: 1 Nov 25 15:18:48.998: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m40.624s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m40.011s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 5m48.676s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:19:08.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:19:09.001: INFO: rc: 1 Nov 25 15:19:09.001: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m0.625s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m0.012s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m8.678s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:19:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:19:28.998: INFO: rc: 1 Nov 25 15:19:28.998: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: The connection to the server 34.82.189.151 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m20.627s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m20.014s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m28.679s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:19:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:19:49.217: INFO: rc: 1 Nov 25 15:19:49.217: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m40.629s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m40.016s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 6m48.681s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:20:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:20:09.418: INFO: rc: 7 Nov 25 15:20:09.418: INFO: error curling; stdout: . 
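The long run of "error curling" entries above and below comes from a polling loop: the wait.PollImmediate frames in the Ginkgo progress reports show the test re-running the kubectl exec curl until the internal LB answers or the timeout expires. A minimal sketch of that pattern follows; the 20s interval and 15m timeout are read from the log, while the use of plain os/exec and a kubectl binary on PATH is an assumption (the framework's real helper goes through its own kubectl wrapper).

// Hedged sketch of the retry loop producing the repeated curl attempts above.
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Values taken from the log: poll every ~20s, give up after 15m,
	// curl the internal LB address from the host-network pod ilb-host-exec.
	args := []string{
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=loadbalancers-6423",
		"exec", "ilb-host-exec", "--",
		"/bin/sh", "-x", "-c", "curl -m 5 'http://10.138.0.6:80/echo?msg=hello'",
	}

	err := wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			// Returning (false, nil) keeps the poll going, which is why the
			// log records one "error curling" entry per failed attempt.
			fmt.Printf("error curling: %v\n%s\n", err, out)
			return false, nil
		}
		fmt.Printf("successful curl; output: %s\n", out)
		return true, nil
	})
	if err != nil {
		fmt.Println("internal LB never responded:", err)
	}
}

In this run the loop eventually succeeds (see the "Successful curl; stdout: hello" entry further down), so the failures here reflect the LB and the node agents being temporarily unreachable rather than a hard failure of this step.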
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 1 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m0.631s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m0.018s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m8.683s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:20:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:20:29.410: INFO: rc: 7 Nov 25 15:20:29.410: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m20.633s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m20.02s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m28.685s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:20:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:20:49.440: INFO: rc: 7 Nov 25 15:20:49.440: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 18 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m40.634s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m40.021s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 7m48.686s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:21:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:21:09.476: INFO: rc: 7 Nov 25 15:21:09.476: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 1 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m0.636s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m0.023s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m8.688s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:21:28.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:21:29.463: INFO: rc: 7 Nov 25 15:21:29.463: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m20.638s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m20.025s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m28.69s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:21:48.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:21:49.408: INFO: rc: 7 Nov 25 15:21:49.408: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m40.64s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m40.027s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 8m48.692s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:22:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:22:09.409: INFO: rc: 7 Nov 25 15:22:09.409: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m0.641s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m0.028s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m8.694s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:22:28.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:22:29.218: INFO: rc: 1 Nov 25 15:22:29.218: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m20.643s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m20.03s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m28.695s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:22:48.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:22:49.415: INFO: rc: 7 Nov 25 15:22:49.415: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m40.645s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m40.032s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 9m48.697s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:23:08.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:23:09.416: INFO: rc: 7 Nov 25 15:23:09.416: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m0.647s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m0.034s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 10m8.699s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:23:28.890: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:23:29.409: INFO: rc: 7 Nov 25 15:23:29.409: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: + curl -m 5 'http://10.138.0.6:80/echo?msg=hello' % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 10.138.0.6 port 80 after 0 ms: Connection refused command terminated with exit code 7 error: exit status 7 ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m20.649s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m20.036s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 10m28.701s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0002e9ea8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:23:48.891: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 15:23:49.427: INFO: stderr: "+ curl -m 5 'http://10.138.0.6:80/echo?msg=hello'\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 5 100 5 0 0 5347 0 --:--:-- --:--:-- --:--:-- 5000\n" Nov 25 15:23:49.427: INFO: stdout: "hello" Nov 25 15:23:49.427: INFO: Successful curl; stdout: hello STEP: switching to external type LoadBalancer 11/25/22 15:23:49.427 Nov 25 15:23:49.601: INFO: Waiting up to 15m0s for service "lb-internal" to have an external LoadBalancer ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m40.651s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m40.038s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 18.626s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ec0708, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m0.654s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m0.041s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 38.628s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 2821 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ec0708, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m20.655s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m20.042s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 58.63s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ec0708, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m40.657s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m40.044s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 1m18.632s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 2821 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ec0708, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc001a21d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002018600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 15:25:09.682: FAIL: Loadbalancer IP not changed to external. 
Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:655 +0x832 STEP: Clean up loadbalancer service 11/25/22 15:25:09.682 STEP: Delete service with finalizer 11/25/22 15:25:09.682 Nov 25 15:25:09.722: FAIL: Failed to delete service loadbalancers-6423/lb-internal Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceDeletedWithFinalizer({0x801de88, 0xc0024da000}, {0xc001ec06c0, 0x12}, {0xc004a068c0, 0xb}) test/e2e/framework/service/wait.go:37 +0x185 k8s.io/kubernetes/test/e2e/network.glob..func19.6.3() test/e2e/network/loadbalancer.go:602 +0x67 panic({0x70eb7e0, 0xc0021f49a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Failf({0x7695064?, 0x4?}, {0x0?, 0x40?, 0xc001a21f20?}) test/e2e/framework/log.go:49 +0x12c k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:655 +0x832 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:25:09.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 15:25:09.762: INFO: Output of kubectl describe svc: Nov 25 15:25:09.762: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6423 describe svc --namespace=loadbalancers-6423' Nov 25 15:25:09.868: INFO: rc: 1 Nov 25 15:25:09.868: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:25:09.868 STEP: Collecting events from namespace "loadbalancers-6423". 
11/25/22 15:25:09.868 Nov 25 15:25:09.908: INFO: Unexpected error: failed to list events in namespace "loadbalancers-6423": <*url.Error | 0xc005318a80>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-6423/events", Err: <*net.OpError | 0xc00315e960>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002bc45a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0018a19e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:25:09.908: FAIL: failed to list events in namespace "loadbalancers-6423": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-6423/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000cb45c0, {0xc0007abc08, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0024da000}, {0xc0007abc08, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000cb4650?, {0xc0007abc08?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00137e4b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc002d30710?, 0xc003d2df50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc003d2df40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002d30710?, 0x2622c40?}, {0xae73300?, 0xc003d2df80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-6423" for this suite. 11/25/22 15:25:09.908 Nov 25 15:25:09.948: FAIL: Couldn't delete ns: "loadbalancers-6423": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-6423": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-6423", Err:(*net.OpError)(0xc004ee01e0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00137e4b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc002d30690?, 0xc003c5efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc002d30690?, 0x0?}, {0xae73300?, 0x5?, 0xc002b96690?}) /usr/local/go/src/reflect/value.go:368 +0xbc
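The spec above fails at the "switching to external type LoadBalancer" step: the lb-internal Service never gets an external ingress address within the budget ("Loadbalancer IP not changed to external"), and the subsequent cleanup errors are the apiserver refusing connections rather than anything the test did. For manual triage of the first problem, here is a minimal sketch, assuming the same cluster is still reachable; it is not the suite's code, but it reuses the wait.PollImmediate pattern visible in the goroutine dumps above to watch the Service until its ingress IP leaves the private range. The namespace, Service name, kubeconfig path, and 15-minute budget are copied from this log and should be treated as placeholders.

```go
// Sketch only, not the e2e suite's implementation: poll the Service until its
// LoadBalancer ingress IP is no longer a private address. Namespace, Service
// name, kubeconfig path, and timeout are placeholders taken from this log.
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same helper that shows up in the goroutine dumps: poll every 10s for up
	// to 15m ("Waiting up to 15m0s for service "lb-internal" to have an
	// external LoadBalancer").
	err = wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
		svc, err := client.CoreV1().Services("loadbalancers-6423").Get(context.TODO(), "lb-internal", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient apiserver errors and keep polling
		}
		ingress := svc.Status.LoadBalancer.Ingress
		if len(ingress) == 0 {
			return false, nil
		}
		fmt.Println("current ingress IP:", ingress[0].IP)
		ip := net.ParseIP(ingress[0].IP)
		// Treat "external" as "not an RFC 1918 address" for this sketch.
		return ip != nil && !ip.IsPrivate(), nil
	})
	if err != nil {
		fmt.Println("LoadBalancer IP never switched to external:", err)
	}
}
```

If the IP never leaves 10.0.0.0/8, the cloud provider never re-provisioned the forwarding rule after the type change, which is consistent with the failure above.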
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012b21e0)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
    test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:07:56.128 Nov 25 15:07:56.128: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:07:56.129 Nov 25 15:07:56.169: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:07:58.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:00.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:02.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:04.208: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:06.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:08.208: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:10.208: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:12.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:14.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:16.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:18.208: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:20.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:22.208: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:24.209: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.210: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.249: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:08:26.249: INFO: Unexpected error: <*errors.errorString | 0xc00017da20>: { s: "timed out waiting for the condition", } Nov 25 15:08:26.249: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012b21e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:08:26.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:08:26.289 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
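This spec never gets past namespace creation: every Post to https://34.82.189.151/api/v1/namespaces is refused, and the nil-pointer panic reported for [AfterEach] is what you would expect when cleanup runs against state that setup never initialized. Before re-running, it is worth confirming the control plane is reachable at all; the probe below is a standalone sketch (not something hack/e2e.go runs), and the endpoint and the 5-minute retry budget are assumptions taken from this log.

```go
// Standalone reachability probe, offered as a sketch rather than anything the
// suite itself runs: retry a TCP dial to the apiserver endpoint reported in
// the log until it stops being refused. Address and retry budget are
// assumptions taken from this run.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "34.82.189.151:443" // apiserver endpoint from the failure messages above
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver accepts TCP connections on", addr)
			return
		}
		fmt.Println("still unreachable:", err) // mirrors the repeated "connection refused" lines
		time.Sleep(10 * time.Second)
	}
	fmt.Println("gave up: apiserver never became reachable at", addr)
}
```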
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc002e68000}, 0xc003128c80, 0x1)
    test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...)
    test/e2e/network/service.go:3962
k8s.io/kubernetes/test/e2e/network.glob..func19.9()
    test/e2e/network/loadbalancer.go:787 +0xf3
from junit_01.xml
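Unlike the previous spec, this one does reach the apiserver; the detailed log below shows setup failing instead because one of the three affinity-lb-esipp-transition backends keeps restarting and never goes Ready on bootstrap-e2e-minion-group-xfgk ("1 containers failed which is more than allowed 0", with a BackOff event for that container). A quick way to surface which container is restarting, sketched here with client-go rather than taken from the suite, is to list the pods behind the Service's selector and print restart counts and last termination state; the namespace, label selector, and kubeconfig path are copied from this log.

```go
// Triage sketch (not suite code): list the pods selected by the
// affinity-lb-esipp-transition Service and report per-container readiness,
// restart count, and last termination state. Namespace, selector, and
// kubeconfig path are placeholders taken from this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("loadbalancers-9756").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=affinity-lb-esipp-transition", // selector shown in the kubectl describe output below
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("%s/%s ready=%v restarts=%d\n", pod.Name, st.Name, st.Ready, st.RestartCount)
			if st.LastTerminationState.Terminated != nil {
				t := st.LastTerminationState.Terminated
				fmt.Printf("  last exit: code=%d reason=%q\n", t.ExitCode, t.Reason)
			}
		}
	}
}
```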
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 14:58:58.655 Nov 25 14:58:58.655: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 14:58:58.659 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 14:58:58.942 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 14:58:59.089 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:780 STEP: creating service in namespace loadbalancers-9756 11/25/22 14:58:59.358 STEP: creating service affinity-lb-esipp-transition in namespace loadbalancers-9756 11/25/22 14:58:59.359 STEP: creating replication controller affinity-lb-esipp-transition in namespace loadbalancers-9756 11/25/22 14:58:59.504 I1125 14:58:59.568366 10187 runners.go:193] Created replication controller with name: affinity-lb-esipp-transition, namespace: loadbalancers-9756, replica count: 3 I1125 14:59:02.669433 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:05.670026 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:08.670257 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:11.671139 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:14.672182 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:17.673326 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:20.674429 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:23.674795 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:26.675156 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 0 pending, 3 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:29.676106 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:32.676675 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:35.677431 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:38.678724 
10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1125 14:59:41.679055 10187 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 14:59:41.679103 10187 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-xfgk I1125 14:59:41.840545 10187 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xfgk ba54c0d2-29af-426e-a049-7278d60a9490 2133 0 2022-11-25 14:55:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xfgk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-xfgk topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5560":"bootstrap-e2e-minion-group-xfgk","csi-mock-csi-mock-volumes-290":"csi-mock-csi-mock-volumes-290"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 14:55:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager 
Update v1 2022-11-25 14:58:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 14:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-xfgk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.196.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35564864f08206045e292b7e32d4bbba,SystemUUID:35564864-f082-0604-5e29-2b7e32d4bbba,BootID:303b460c-3762-4624-8d44-d7a3124b5e6c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f,DevicePath:,},},Config:nil,},} I1125 14:59:41.840961 10187 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-xfgk I1125 14:59:41.933175 10187 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk I1125 14:59:42.125327 10187 runners.go:193] metadata-proxy-v0.1-nfk54 started at 2022-11-25 14:55:35 +0000 UTC (0+2 container statuses recorded) I1125 14:59:42.125365 10187 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 14:59:42.125370 10187 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 14:59:42.125374 10187 runners.go:193] metrics-server-v0.5.2-867b8754b9-4d9k2 started at 2022-11-25 14:55:55 +0000 UTC (0+2 container statuses recorded) I1125 14:59:42.125380 10187 runners.go:193] Container metrics-server ready: false, restart count 2 I1125 14:59:42.125384 10187 runners.go:193] Container metrics-server-nanny ready: false, restart count 2 I1125 14:59:42.125387 10187 runners.go:193] test-hostpath-type-4d99c started at 2022-11-25 14:59:37 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125392 10187 runners.go:193] Container host-path-sh-testing ready: false, restart count 0 I1125 14:59:42.125396 10187 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) I1125 14:59:42.125401 10187 runners.go:193] Container csi-attacher ready: true, restart count 2 I1125 14:59:42.125405 10187 runners.go:193] Container csi-provisioner ready: true, restart count 2 I1125 14:59:42.125408 10187 runners.go:193] Container csi-resizer ready: true, 
restart count 2 I1125 14:59:42.125411 10187 runners.go:193] Container csi-snapshotter ready: true, restart count 2 I1125 14:59:42.125414 10187 runners.go:193] Container hostpath ready: true, restart count 2 I1125 14:59:42.125418 10187 runners.go:193] Container liveness-probe ready: true, restart count 2 I1125 14:59:42.125421 10187 runners.go:193] Container node-driver-registrar ready: true, restart count 2 I1125 14:59:42.125425 10187 runners.go:193] pod-590f7d35-2f3d-495d-bd05-1b5354a0e9cc started at 2022-11-25 14:58:45 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125429 10187 runners.go:193] Container write-pod ready: false, restart count 0 I1125 14:59:42.125432 10187 runners.go:193] konnectivity-agent-sz497 started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125438 10187 runners.go:193] Container konnectivity-agent ready: true, restart count 0 I1125 14:59:42.125441 10187 runners.go:193] local-io-client started at 2022-11-25 14:59:28 +0000 UTC (1+1 container statuses recorded) I1125 14:59:42.125446 10187 runners.go:193] Init container local-io-init ready: true, restart count 0 I1125 14:59:42.125450 10187 runners.go:193] Container local-io-client ready: false, restart count 0 I1125 14:59:42.125453 10187 runners.go:193] pod-configmaps-04565d9c-c879-4e8e-9fe4-0833d5d0f610 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125458 10187 runners.go:193] Container agnhost-container ready: false, restart count 0 I1125 14:59:42.125461 10187 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-xfgk started at 2022-11-25 14:55:34 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125466 10187 runners.go:193] Container kube-proxy ready: false, restart count 2 I1125 14:59:42.125469 10187 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-x8ttp started at 2022-11-25 14:58:51 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125474 10187 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 14:59:42.125477 10187 runners.go:193] affinity-lb-esipp-transition-228t6 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125482 10187 runners.go:193] Container affinity-lb-esipp-transition ready: true, restart count 1 I1125 14:59:42.125485 10187 runners.go:193] var-expansion-39f058ab-2eab-4367-85ce-d5109afbf080 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) I1125 14:59:42.125489 10187 runners.go:193] Container dapi-container ready: false, restart count 0 I1125 14:59:42.125492 10187 runners.go:193] pvc-volume-tester-qwbpr started at <nil> (0+0 container statuses recorded) I1125 14:59:42.125533 10187 runners.go:193] csi-mockplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+4 container statuses recorded) I1125 14:59:42.125538 10187 runners.go:193] Container busybox ready: true, restart count 1 I1125 14:59:42.125541 10187 runners.go:193] Container csi-provisioner ready: true, restart count 1 I1125 14:59:42.125544 10187 runners.go:193] Container driver-registrar ready: true, restart count 1 I1125 14:59:42.125548 10187 runners.go:193] Container mock ready: true, restart count 1 I1125 14:59:42.672080 10187 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-xfgk I1125 14:59:42.835332 10187 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-9756 Nov 25 14:59:42.835: INFO: Unexpected error: failed to create replication controller with service in the namespace: 
loadbalancers-9756: <*errors.errorString | 0xc003b442c0>: { s: "1 containers failed which is more than allowed 0", } Nov 25 14:59:42.835: FAIL: failed to create replication controller with service in the namespace: loadbalancers-9756: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc002e68000}, 0xc003128c80, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.9() test/e2e/network/loadbalancer.go:787 +0xf3 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 14:59:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 14:59:42.998: INFO: Output of kubectl describe svc: Nov 25 14:59:42.998: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-9756 describe svc --namespace=loadbalancers-9756' Nov 25 14:59:43.603: INFO: stderr: "" Nov 25 14:59:43.603: INFO: stdout: "Name: affinity-lb-esipp-transition\nNamespace: loadbalancers-9756\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-esipp-transition\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.133.54\nIPs: 10.0.133.54\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 32337/TCP\nEndpoints: 10.64.0.36:9376,10.64.2.25:9376,10.64.3.23:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Local\nHealthCheck NodePort: 32481\nEvents: <none>\n" Nov 25 14:59:43.603: INFO: Name: affinity-lb-esipp-transition Namespace: loadbalancers-9756 Labels: <none> Annotations: <none> Selector: name=affinity-lb-esipp-transition Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.133.54 IPs: 10.0.133.54 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 32337/TCP Endpoints: 10.64.0.36:9376,10.64.2.25:9376,10.64.3.23:9376 Session Affinity: ClientIP External Traffic Policy: Local HealthCheck NodePort: 32481 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 14:59:43.604 STEP: Collecting events from namespace "loadbalancers-9756". 11/25/22 14:59:43.604 STEP: Found 21 events. 
11/25/22 14:59:43.68 Nov 25 14:59:43.680: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-esipp-transition-228t6: { } Scheduled: Successfully assigned loadbalancers-9756/affinity-lb-esipp-transition-228t6 to bootstrap-e2e-minion-group-xfgk Nov 25 14:59:43.680: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-esipp-transition-7njl4: { } Scheduled: Successfully assigned loadbalancers-9756/affinity-lb-esipp-transition-7njl4 to bootstrap-e2e-minion-group-cs2j Nov 25 14:59:43.680: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-esipp-transition-mj8kv: { } Scheduled: Successfully assigned loadbalancers-9756/affinity-lb-esipp-transition-mj8kv to bootstrap-e2e-minion-group-nfrc Nov 25 14:59:43.680: INFO: At 2022-11-25 14:58:59 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-228t6 Nov 25 14:59:43.680: INFO: At 2022-11-25 14:58:59 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-7njl4 Nov 25 14:59:43.680: INFO: At 2022-11-25 14:58:59 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-mj8kv Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:30 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-k47v6" : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:30 +0000 UTC - event for affinity-lb-esipp-transition-7njl4: {kubelet bootstrap-e2e-minion-group-cs2j} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-p48kj" : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:30 +0000 UTC - event for affinity-lb-esipp-transition-mj8kv: {kubelet bootstrap-e2e-minion-group-nfrc} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-gfspg" : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} Started: Started container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} Created: Created container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-7njl4: {kubelet bootstrap-e2e-minion-group-cs2j} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-7njl4: {kubelet bootstrap-e2e-minion-group-cs2j} Created: Created container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-7njl4: {kubelet bootstrap-e2e-minion-group-cs2j} Started: Started container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 
+0000 UTC - event for affinity-lb-esipp-transition-mj8kv: {kubelet bootstrap-e2e-minion-group-nfrc} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-mj8kv: {kubelet bootstrap-e2e-minion-group-nfrc} Created: Created container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:32 +0000 UTC - event for affinity-lb-esipp-transition-mj8kv: {kubelet bootstrap-e2e-minion-group-nfrc} Started: Started container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:33 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} Killing: Stopping container affinity-lb-esipp-transition Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:36 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 14:59:43.680: INFO: At 2022-11-25 14:59:42 +0000 UTC - event for affinity-lb-esipp-transition-228t6: {kubelet bootstrap-e2e-minion-group-xfgk} BackOff: Back-off restarting failed container affinity-lb-esipp-transition in pod affinity-lb-esipp-transition-228t6_loadbalancers-9756(333ea77e-2052-41cf-9960-0493c67f3067) Nov 25 14:59:43.742: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 14:59:43.742: INFO: affinity-lb-esipp-transition-228t6 bootstrap-e2e-minion-group-xfgk Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:41 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp-transition]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:41 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-esipp-transition]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC }] Nov 25 14:59:43.742: INFO: affinity-lb-esipp-transition-7njl4 bootstrap-e2e-minion-group-cs2j Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC }] Nov 25 14:59:43.742: INFO: affinity-lb-esipp-transition-mj8kv bootstrap-e2e-minion-group-nfrc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 14:59:28 +0000 UTC }] Nov 25 14:59:43.742: INFO: Nov 25 14:59:44.150: INFO: Logging node info for node bootstrap-e2e-master Nov 25 14:59:44.215: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 57fbafcc-fd48-4c2a-b8af-d2f45e071824 638 0 2022-11-25 14:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 14:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 14:55:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 14:55:53 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 14:55:53 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 14:55:53 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 14:55:53 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.189.151,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a27504a9a8de9326ab25236db517b6d4,SystemUUID:a27504a9-a8de-9326-ab25-236db517b6d4,BootID:fd4b6e0f-8d3b-43d1-8d87-0b5f34de48b4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 14:59:44.216: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 14:59:44.309: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 14:59:44.434: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container etcd-container ready: true, restart count 0 Nov 25 14:59:44.434: INFO: kube-scheduler-bootstrap-e2e-master 
started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container kube-scheduler ready: true, restart count 3 Nov 25 14:59:44.434: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container etcd-container ready: true, restart count 0 Nov 25 14:59:44.434: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 14:59:44.434: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container kube-apiserver ready: true, restart count 0 Nov 25 14:59:44.434: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 25 14:59:44.434: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 14:59:44.434: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:44.434: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 25 14:59:44.434: INFO: metadata-proxy-v0.1-2v8cl started at 2022-11-25 14:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 14:59:44.434: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 14:59:44.434: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 14:59:44.691: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 14:59:44.691: INFO: Logging node info for node bootstrap-e2e-minion-group-cs2j Nov 25 14:59:44.748: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-cs2j 709b4477-dd95-4ae0-b576-f41790f3abc7 2274 0 2022-11-25 14:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-cs2j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-cs2j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4829":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-7246":"bootstrap-e2e-minion-group-cs2j"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 14:55:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 14:58:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 14:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-cs2j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:37 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:03 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:03 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:03 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 14:59:03 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.154.188,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:009dcaae494ddb3388c5512015911a5e,SystemUUID:009dcaae-494d-db33-88c5-512015911a5e,BootID:0ab614df-9d04-456f-9e89-54d5c6a29e6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 14:59:44.748: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-cs2j Nov 25 14:59:44.813: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-cs2j Nov 25 14:59:45.002: INFO: konnectivity-agent-zd86w started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container konnectivity-agent ready: true, restart count 3 Nov 25 14:59:45.002: INFO: local-io-client started at 2022-11-25 14:59:38 +0000 UTC (1+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container local-io-client ready: false, restart count 0 Nov 25 14:59:45.002: INFO: metadata-proxy-v0.1-jj4l2 started at 2022-11-25 14:55:31 +0000 UTC (0+2 container statuses recorded) Nov 25 14:59:45.002: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 14:59:45.002: INFO: coredns-6d97d5ddb-62vqw started at 2022-11-25 14:55:49 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container coredns ready: false, restart count 3 Nov 25 14:59:45.002: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:59:28 +0000 UTC (0+7 container statuses recorded) Nov 25 14:59:45.002: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container hostpath ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 14:59:45.002: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-n2wrg 
started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 14:59:45.002: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-8pmc5 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 14:59:45.002: INFO: pod-ddee8992-7f2b-418d-a1ff-6286a761b8e6 started at 2022-11-25 14:59:39 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container write-pod ready: false, restart count 0 Nov 25 14:59:45.002: INFO: l7-default-backend-8549d69d99-9c99n started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 14:59:45.002: INFO: kube-dns-autoscaler-5f6455f985-q4zhz started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container autoscaler ready: true, restart count 4 Nov 25 14:59:45.002: INFO: volume-snapshot-controller-0 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container volume-snapshot-controller ready: false, restart count 3 Nov 25 14:59:45.002: INFO: reallocate-nodeport-test-mkwml started at 2022-11-25 14:58:49 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container netexec ready: true, restart count 1 Nov 25 14:59:45.002: INFO: hostpath-symlink-prep-provisioning-5265 started at <nil> (0+0 container statuses recorded) Nov 25 14:59:45.002: INFO: coredns-6d97d5ddb-gzrc5 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container coredns ready: true, restart count 1 Nov 25 14:59:45.002: INFO: pod-subpath-test-inlinevolume-chp6 started at 2022-11-25 14:58:30 +0000 UTC (1+2 container statuses recorded) Nov 25 14:59:45.002: INFO: Init container init-volume-inlinevolume-chp6 ready: true, restart count 1 Nov 25 14:59:45.002: INFO: Container test-container-subpath-inlinevolume-chp6 ready: true, restart count 1 Nov 25 14:59:45.002: INFO: Container test-container-volume-inlinevolume-chp6 ready: true, restart count 1 Nov 25 14:59:45.002: INFO: affinity-lb-esipp-transition-7njl4 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container affinity-lb-esipp-transition ready: true, restart count 0 Nov 25 14:59:45.002: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) Nov 25 14:59:45.002: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container hostpath ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 14:59:45.002: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 14:59:45.002: INFO: nfs-io-client started at 2022-11-25 14:59:28 +0000 UTC (1+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Init container nfs-io-init ready: false, restart count 0 Nov 25 14:59:45.002: INFO: Container nfs-io-client ready: false, restart count 0 Nov 25 14:59:45.002: INFO: kube-proxy-bootstrap-e2e-minion-group-cs2j started at 2022-11-25 
14:55:30 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:45.002: INFO: Container kube-proxy ready: true, restart count 1 Nov 25 14:59:48.174: INFO: Latency metrics for node bootstrap-e2e-minion-group-cs2j Nov 25 14:59:48.174: INFO: Logging node info for node bootstrap-e2e-minion-group-nfrc Nov 25 14:59:48.244: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-nfrc 32e3ddf0-9230-4008-a6d2-35385dd6942e 2384 0 2022-11-25 14:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-nfrc kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-nfrc topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 14:55:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 14:59:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-nfrc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:40 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:40 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:40 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 14:59:40 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.169.41,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:584471f9c540880f2541598af76fd197,SystemUUID:584471f9-c540-880f-2541-598af76fd197,BootID:925b3820-ba2a-4f24-949e-2611ee406076,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8209^ad4cfbc5-6cd1-11ed-9cc2-ea835e3ab61a kubernetes.io/csi/csi-hostpath-multivolume-8209^ae9f3a1c-6cd1-11ed-9cc2-ea835e3ab61a],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 14:59:48.244: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-nfrc Nov 25 14:59:48.356: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-nfrc Nov 25 14:59:48.611: INFO: konnectivity-agent-2vkfh started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container konnectivity-agent ready: true, restart count 1 Nov 25 14:59:48.611: INFO: nfs-server started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container nfs-server ready: true, restart count 0 Nov 25 14:59:48.611: INFO: pod-74aca48d-b1cc-47b2-a607-2327728b5c63 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container write-pod ready: false, restart count 0 Nov 25 14:59:48.611: INFO: metadata-proxy-v0.1-rfhls started at 2022-11-25 14:55:36 +0000 UTC (0+2 container statuses recorded) Nov 25 14:59:48.611: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 14:59:48.611: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 14:59:48.611: INFO: pod-subpath-test-preprovisionedpv-7q4b started at 2022-11-25 14:59:28 +0000 UTC (1+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Init container init-volume-preprovisionedpv-7q4b ready: true, restart count 0 Nov 25 14:59:48.611: INFO: Container test-container-subpath-preprovisionedpv-7q4b ready: false, restart count 0 Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-z4jp4 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-f2gt4 started at <nil> (0+0 container statuses recorded) Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-lpq6h started at 2022-11-25 14:58:46 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-tlskz started at 2022-11-25 14:58:47 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 14:59:48.611: INFO: pod-ec7383ae-4156-48b9-aea4-2c6597723edd started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container 
write-pod ready: false, restart count 0 Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-h6jtn started at 2022-11-25 14:58:31 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 14:59:48.611: INFO: hostexec-bootstrap-e2e-minion-group-nfrc-8qwt6 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 14:59:48.611: INFO: kube-proxy-bootstrap-e2e-minion-group-nfrc started at 2022-11-25 14:55:35 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 14:59:48.611: INFO: affinity-lb-esipp-transition-mj8kv started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container affinity-lb-esipp-transition ready: true, restart count 0 Nov 25 14:59:48.611: INFO: external-provisioner-626zt started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 25 14:59:48.611: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:44 +0000 UTC (0+7 container statuses recorded) Nov 25 14:59:48.611: INFO: Container csi-attacher ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container csi-provisioner ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container csi-resizer ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container hostpath ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container liveness-probe ready: false, restart count 2 Nov 25 14:59:48.611: INFO: Container node-driver-registrar ready: false, restart count 2 Nov 25 14:59:48.611: INFO: pod-acfbc4f2-eb46-487c-beec-554254dadba8 started at 2022-11-25 14:59:43 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:48.611: INFO: Container write-pod ready: true, restart count 0 Nov 25 14:59:49.450: INFO: Latency metrics for node bootstrap-e2e-minion-group-nfrc Nov 25 14:59:49.450: INFO: Logging node info for node bootstrap-e2e-minion-group-xfgk Nov 25 14:59:49.590: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xfgk ba54c0d2-29af-426e-a049-7278d60a9490 2133 0 2022-11-25 14:55:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xfgk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-xfgk topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5560":"bootstrap-e2e-minion-group-xfgk","csi-mock-csi-mock-volumes-290":"csi-mock-csi-mock-volumes-290"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 14:55:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 14:58:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 14:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-xfgk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 14:55:38 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 14:59:08 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.196.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35564864f08206045e292b7e32d4bbba,SystemUUID:35564864-f082-0604-5e29-2b7e32d4bbba,BootID:303b460c-3762-4624-8d44-d7a3124b5e6c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f,DevicePath:,},},Config:nil,},} Nov 25 14:59:49.590: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-xfgk Nov 25 14:59:49.662: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk Nov 25 14:59:49.815: INFO: metadata-proxy-v0.1-nfk54 started at 2022-11-25 14:55:35 +0000 UTC (0+2 container statuses recorded) Nov 25 14:59:49.815: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 14:59:49.815: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 14:59:49.815: INFO: metrics-server-v0.5.2-867b8754b9-4d9k2 started at 2022-11-25 14:55:55 +0000 UTC (0+2 container statuses recorded) Nov 25 14:59:49.815: INFO: Container metrics-server ready: false, restart count 2 Nov 25 14:59:49.815: INFO: Container metrics-server-nanny ready: false, restart count 3 Nov 25 14:59:49.815: INFO: test-hostpath-type-4d99c started at 2022-11-25 14:59:37 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 25 14:59:49.815: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:59:48 +0000 UTC (0+7 container statuses recorded) Nov 25 14:59:49.815: INFO: Container csi-attacher ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container csi-provisioner ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container csi-resizer ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container csi-snapshotter ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container hostpath ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container liveness-probe ready: false, restart count 0 Nov 25 14:59:49.815: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 25 14:59:49.815: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) Nov 25 14:59:49.815: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container hostpath ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 14:59:49.815: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 14:59:49.815: INFO: 
pod-590f7d35-2f3d-495d-bd05-1b5354a0e9cc started at 2022-11-25 14:58:45 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container write-pod ready: false, restart count 0 Nov 25 14:59:49.815: INFO: konnectivity-agent-sz497 started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 25 14:59:49.815: INFO: pod-configmaps-04565d9c-c879-4e8e-9fe4-0833d5d0f610 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 14:59:49.815: INFO: kube-proxy-bootstrap-e2e-minion-group-xfgk started at 2022-11-25 14:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container kube-proxy ready: false, restart count 2 Nov 25 14:59:49.815: INFO: hostpath-symlink-prep-provisioning-8345 started at 2022-11-25 14:59:45 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container init-volume-provisioning-8345 ready: false, restart count 0 Nov 25 14:59:49.815: INFO: hostexec-bootstrap-e2e-minion-group-xfgk-x8ttp started at 2022-11-25 14:58:51 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 14:59:49.815: INFO: affinity-lb-esipp-transition-228t6 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container affinity-lb-esipp-transition ready: false, restart count 1 Nov 25 14:59:49.815: INFO: var-expansion-39f058ab-2eab-4367-85ce-d5109afbf080 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) Nov 25 14:59:49.815: INFO: Container dapi-container ready: false, restart count 0 Nov 25 14:59:49.815: INFO: csi-mockplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+4 container statuses recorded) Nov 25 14:59:49.815: INFO: Container busybox ready: true, restart count 1 Nov 25 14:59:49.815: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 14:59:49.815: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 14:59:49.815: INFO: Container mock ready: true, restart count 1 Nov 25 14:59:50.531: INFO: Latency metrics for node bootstrap-e2e-minion-group-xfgk [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-9756" for this suite. 11/25/22 14:59:50.531
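The node-info and "Logging pods the kubelet thinks is on node ..." dumps above are produced by the framework's failure handler for each node. The sketch below is not part of the test suite; it is only for orientation, and it assumes a reachable cluster with a kubeconfig at the default location. It reproduces the Ready-condition portion of such a dump using plain client-go:

// Illustrative sketch: list every node and print its Ready condition,
// similar to the per-node "Logging node info" step in the dump above.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at $HOME/.kube/config, as in the run above.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady {
				// Mirrors the NodeCondition{Type:Ready,...} entries in the dump.
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}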
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shandle\sload\sbalancer\scleanup\sfinalizer\sfor\sservice\s\[Slow\]$'
test/e2e/framework/service/wait.go:79 k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceUpdatedWithFinalizer({0x801de88?, 0xc0025004e0}, {0xc002d41e30, 0x12}, {0xc004817a70, 0xc}, 0x0) test/e2e/framework/service/wait.go:79 +0x1e7 k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:842 +0x1f7 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:00:03.933: failed to list events in namespace "loadbalancers-6141": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:00:03.973: Couldn't delete ns: "loadbalancers-6141": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-6141", Err:(*net.OpError)(0xc002ceb2c0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 14:58:45.994 Nov 25 14:58:45.994: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 14:58:45.996 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 14:58:46.128 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 14:58:46.209 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should handle load balancer cleanup finalizer for service [Slow] test/e2e/network/loadbalancer.go:818 STEP: Create load balancer service 11/25/22 14:58:46.333 STEP: Wait for load balancer to serve traffic 11/25/22 14:58:46.385 Nov 25 14:58:46.436: INFO: Waiting up to 15m0s for service "lb-finalizer" to have a LoadBalancer STEP: Check if finalizer presents on service with type=LoadBalancer 11/25/22 14:59:22.549 STEP: Wait for service to hasFinalizer=true 11/25/22 14:59:22.55 STEP: Check if finalizer is removed on service after changed to type=ClusterIP 11/25/22 14:59:22.734 Nov 25 14:59:23.016: INFO: Waiting up to 15m0s for service "lb-finalizer" to have no LoadBalancer STEP: Wait for service to hasFinalizer=false 11/25/22 14:59:33.465 Nov 25 14:59:33.579: INFO: Service loadbalancers-6141/lb-finalizer hasFinalizer=true, want false Nov 25 15:00:03.619: FAIL: Failed to wait for service to hasFinalizer=false: Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141/services/lb-finalizer": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceUpdatedWithFinalizer({0x801de88?, 0xc0025004e0}, {0xc002d41e30, 0x12}, {0xc004817a70, 0xc}, 0x0) test/e2e/framework/service/wait.go:79 +0x1e7 k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:842 +0x1f7 STEP: Check that service can be deleted with finalizer 11/25/22 15:00:03.619 STEP: Delete service with finalizer 11/25/22 15:00:03.619 Nov 25 15:00:03.658: FAIL: Failed to delete service loadbalancers-6141/lb-finalizer Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceDeletedWithFinalizer({0x801de88, 0xc0025004e0}, {0xc002d41e30, 0x12}, {0xc004817a70, 0xc}) test/e2e/framework/service/wait.go:37 +0x185 k8s.io/kubernetes/test/e2e/network.glob..func19.12.2() test/e2e/network/loadbalancer.go:829 +0x67 panic({0x70eb7e0, 0xc0009ed030}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Failf({0x76dac78?, 0xc0025004e0?}, {0xc003e1fec0?, 0x0?, 0x1?}) test/e2e/framework/log.go:49 +0x12c k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceUpdatedWithFinalizer({0x801de88?, 0xc0025004e0}, {0xc002d41e30, 0x12}, {0xc004817a70, 0xc}, 0x0) test/e2e/framework/service/wait.go:79 +0x1e7 k8s.io/kubernetes/test/e2e/network.glob..func19.12() test/e2e/network/loadbalancer.go:842 +0x1f7 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:00:03.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 15:00:03.699: INFO: Output of kubectl describe svc: Nov 25 15:00:03.699: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config 
--namespace=loadbalancers-6141 describe svc --namespace=loadbalancers-6141' Nov 25 15:00:03.891: INFO: rc: 1 Nov 25 15:00:03.891: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:00:03.892 STEP: Collecting events from namespace "loadbalancers-6141". 11/25/22 15:00:03.892 Nov 25 15:00:03.933: INFO: Unexpected error: failed to list events in namespace "loadbalancers-6141": <*url.Error | 0xc0020ad380>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141/events", Err: <*net.OpError | 0xc003d52370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00299de00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00127e0e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:00:03.933: FAIL: failed to list events in namespace "loadbalancers-6141": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00372a5c0, {0xc0048cc6f0, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0025004e0}, {0xc0048cc6f0, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00372a650?, {0xc0048cc6f0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012964b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00119e750?, 0xc003e99f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00119e750?, 0x7fadfa0?}, {0xae73300?, 0xc003e99f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-6141" for this suite. 11/25/22 15:00:03.934 Nov 25 15:00:03.973: FAIL: Couldn't delete ns: "loadbalancers-6141": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-6141": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-6141", Err:(*net.OpError)(0xc002ceb2c0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012964b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00119e680?, 0xc0000cdfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00119e680?, 0x0?}, {0xae73300?, 0x5?, 0xc0048cd920?}) /usr/local/go/src/reflect/value.go:368 +0xbc
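For context on the failure above: WaitForServiceUpdatedWithFinalizer repeatedly GETs the Service and checks whether the load-balancer cleanup finalizer is present, so once the apiserver at 34.82.189.151 starts refusing connections every poll fails and the step can only error out or time out, which then cascades into the event-dump and namespace-delete failures. The sketch below shows that style of poll with client-go under stated assumptions; the finalizer key, timeouts, and names are illustrative and this is not the e2e framework's actual implementation.

// Illustrative sketch only (not the e2e framework code): poll a Service until
// the load-balancer cleanup finalizer is present or absent, tolerating
// transient apiserver errors such as the "connection refused" seen above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Assumed finalizer key used by the service controller for LB cleanup.
const lbCleanupFinalizer = "service.kubernetes.io/load-balancer-cleanup"

func waitForFinalizer(cs kubernetes.Interface, ns, name string, want bool) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Keep retrying on apiserver errors until the overall timeout.
			fmt.Println("transient error getting service:", err)
			return false, nil
		}
		has := false
		for _, f := range svc.Finalizers {
			if f == lbCleanupFinalizer {
				has = true
			}
		}
		fmt.Printf("Service %s/%s hasFinalizer=%v, want %v\n", ns, name, has, want)
		return has == want, nil
	})
}

func main() {
	// Kubeconfig path mirrors the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForFinalizer(cs, "loadbalancers-6141", "lb-finalizer", false); err != nil {
		fmt.Println("wait failed:", err)
	}
}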
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75cdc0f?, {0x801de88, 0xc002b6e340}, 0xc000931680, 0x0)
    test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...)
    test/e2e/network/service.go:3966
k8s.io/kubernetes/test/e2e/network.glob..func19.10()
    test/e2e/network/loadbalancer.go:798 +0xf0
There were additional failures detected after the initial failure:
[FAILED] Nov 25 15:01:54.409: Couldn't delete ns: "loadbalancers-2366": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-2366": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-2366", Err:(*net.OpError)(0xc002de27d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
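For reference, execAffinityTestForLBService stands up a replication controller plus a type=LoadBalancer Service with sessionAffinity=ClientIP; that is the same affinity-lb Service the kubectl describe output further down shows (port 80 -> 9376, External Traffic Policy: Cluster). A minimal client-go sketch of creating an equivalent Service follows; names and field values are illustrative assumptions and this is not the test's own helper code.

// Illustrative sketch (not the e2e helper): build a LoadBalancer Service with
// ClientIP session affinity, roughly matching the "affinity-lb" Service below.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// CreateAffinityLBService creates a Service equivalent to the one the test
// builds: type LoadBalancer, sessionAffinity ClientIP, port 80 -> target 9376.
func CreateAffinityLBService(cs kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb", Namespace: ns},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeLoadBalancer,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-lb"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
}

Once such a Service is serving, the test would exercise the external IP and expect repeated requests from one client to stick to a single backend; in the run below it never got that far, because one of the affinity-lb pods failed during RC creation.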
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 14:59:57.458 Nov 25 14:59:57.458: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 14:59:57.461 Nov 25 14:59:57.501: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 14:59:59.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:01.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:03.542: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:05.542: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:07.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:09.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:11.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:13.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:15.542: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:17.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:19.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:21.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:23.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:25.541: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:01:29.005 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:01:29.157 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:791 STEP: creating service in namespace loadbalancers-2366 11/25/22 15:01:29.519 STEP: creating service affinity-lb in namespace loadbalancers-2366 11/25/22 15:01:29.52 STEP: 
creating replication controller affinity-lb in namespace loadbalancers-2366 11/25/22 15:01:29.655 I1125 15:01:29.700251 10185 runners.go:193] Created replication controller with name: affinity-lb, namespace: loadbalancers-2366, replica count: 3 I1125 15:01:32.751522 10185 runners.go:193] affinity-lb Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:01:35.752275 10185 runners.go:193] affinity-lb Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:01:38.753424 10185 runners.go:193] affinity-lb Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:01:41.753555 10185 runners.go:193] affinity-lb Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:01:41.753574 10185 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-xfgk I1125 15:01:41.807917 10185 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xfgk ba54c0d2-29af-426e-a049-7278d60a9490 2850 0 2022-11-25 14:55:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xfgk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-xfgk topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5560":"bootstrap-e2e-minion-group-xfgk","csi-hostpath-multivolume-7269":"bootstrap-e2e-minion-group-xfgk"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 14:58:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 15:00:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 15:01:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-xfgk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:39 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:00:31 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.196.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35564864f08206045e292b7e32d4bbba,SystemUUID:35564864-f082-0604-5e29-2b7e32d4bbba,BootID:303b460c-3762-4624-8d44-d7a3124b5e6c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f,DevicePath:,},},Config:nil,},} I1125 15:01:41.808461 10185 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-xfgk I1125 15:01:41.874793 10185 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk I1125 15:01:42.109592 10185 runners.go:193] csi-mockplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+4 container statuses recorded) I1125 15:01:42.109615 10185 runners.go:193] Container busybox ready: true, restart count 2 I1125 15:01:42.109620 10185 runners.go:193] Container csi-provisioner ready: false, restart count 2 I1125 15:01:42.109624 10185 runners.go:193] Container 
driver-registrar ready: true, restart count 2 I1125 15:01:42.109634 10185 runners.go:193] Container mock ready: true, restart count 2 I1125 15:01:42.109638 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-xpsv2 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109644 10185 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 15:01:42.109648 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-g6lzz started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109653 10185 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 15:01:42.109657 10185 runners.go:193] var-expansion-39f058ab-2eab-4367-85ce-d5109afbf080 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109664 10185 runners.go:193] Container dapi-container ready: false, restart count 0 I1125 15:01:42.109668 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-5xt4b started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109673 10185 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 15:01:42.109677 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-6tq5z started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109682 10185 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 15:01:42.109685 10185 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 14:59:48 +0000 UTC (0+7 container statuses recorded) I1125 15:01:42.109690 10185 runners.go:193] Container csi-attacher ready: true, restart count 1 I1125 15:01:42.109694 10185 runners.go:193] Container csi-provisioner ready: true, restart count 1 I1125 15:01:42.109697 10185 runners.go:193] Container csi-resizer ready: true, restart count 1 I1125 15:01:42.109699 10185 runners.go:193] Container csi-snapshotter ready: true, restart count 1 I1125 15:01:42.109701 10185 runners.go:193] Container hostpath ready: true, restart count 1 I1125 15:01:42.109705 10185 runners.go:193] Container liveness-probe ready: true, restart count 1 I1125 15:01:42.109708 10185 runners.go:193] Container node-driver-registrar ready: true, restart count 1 I1125 15:01:42.109712 10185 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) I1125 15:01:42.109717 10185 runners.go:193] Container csi-attacher ready: true, restart count 2 I1125 15:01:42.109721 10185 runners.go:193] Container csi-provisioner ready: true, restart count 2 I1125 15:01:42.109725 10185 runners.go:193] Container csi-resizer ready: true, restart count 2 I1125 15:01:42.109729 10185 runners.go:193] Container csi-snapshotter ready: true, restart count 2 I1125 15:01:42.109732 10185 runners.go:193] Container hostpath ready: true, restart count 2 I1125 15:01:42.109736 10185 runners.go:193] Container liveness-probe ready: true, restart count 2 I1125 15:01:42.109739 10185 runners.go:193] Container node-driver-registrar ready: true, restart count 2 I1125 15:01:42.109743 10185 runners.go:193] pod-590f7d35-2f3d-495d-bd05-1b5354a0e9cc started at 2022-11-25 14:58:45 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109748 10185 runners.go:193] Container write-pod ready: false, restart count 0 I1125 15:01:42.109751 10185 runners.go:193] pvc-volume-tester-vfs8x started at 2022-11-25 14:59:50 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109756 10185 
runners.go:193] Container volume-tester ready: false, restart count 0 I1125 15:01:42.109759 10185 runners.go:193] pod-subpath-test-inlinevolume-gcnh started at 2022-11-25 14:59:51 +0000 UTC (1+2 container statuses recorded) I1125 15:01:42.109764 10185 runners.go:193] Init container init-volume-inlinevolume-gcnh ready: true, restart count 1 I1125 15:01:42.109768 10185 runners.go:193] Container test-container-subpath-inlinevolume-gcnh ready: true, restart count 1 I1125 15:01:42.109772 10185 runners.go:193] Container test-container-volume-inlinevolume-gcnh ready: true, restart count 1 I1125 15:01:42.109776 10185 runners.go:193] metadata-proxy-v0.1-nfk54 started at 2022-11-25 14:55:35 +0000 UTC (0+2 container statuses recorded) I1125 15:01:42.109801 10185 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 15:01:42.109805 10185 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 15:01:42.109810 10185 runners.go:193] metrics-server-v0.5.2-867b8754b9-4d9k2 started at 2022-11-25 14:55:55 +0000 UTC (0+2 container statuses recorded) I1125 15:01:42.109815 10185 runners.go:193] Container metrics-server ready: false, restart count 3 I1125 15:01:42.109819 10185 runners.go:193] Container metrics-server-nanny ready: false, restart count 4 I1125 15:01:42.109822 10185 runners.go:193] pod-configmaps-04565d9c-c879-4e8e-9fe4-0833d5d0f610 started at 2022-11-25 14:58:30 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109827 10185 runners.go:193] Container agnhost-container ready: false, restart count 0 I1125 15:01:42.109831 10185 runners.go:193] konnectivity-agent-sz497 started at 2022-11-25 14:55:50 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109837 10185 runners.go:193] Container konnectivity-agent ready: true, restart count 1 I1125 15:01:42.109840 10185 runners.go:193] affinity-lb-ljvdn started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109845 10185 runners.go:193] Container affinity-lb ready: true, restart count 1 I1125 15:01:42.109850 10185 runners.go:193] netserver-2 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109855 10185 runners.go:193] Container webserver ready: false, restart count 1 I1125 15:01:42.109858 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-x8ttp started at 2022-11-25 14:58:51 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109864 10185 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 15:01:42.109868 10185 runners.go:193] affinity-lb-esipp-transition-228t6 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109873 10185 runners.go:193] Container affinity-lb-esipp-transition ready: true, restart count 2 I1125 15:01:42.109878 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-vl9kh started at 2022-11-25 15:01:30 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109881 10185 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 15:01:42.109884 10185 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-xfgk started at 2022-11-25 14:55:34 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109886 10185 runners.go:193] Container kube-proxy ready: true, restart count 3 I1125 15:01:42.109888 10185 runners.go:193] hostexec-bootstrap-e2e-minion-group-xfgk-cznn8 started at 2022-11-25 15:01:28 +0000 UTC (0+1 container statuses recorded) I1125 15:01:42.109891 10185 runners.go:193] Container 
agnhost-container ready: true, restart count 0 I1125 15:01:51.514298 10185 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-xfgk I1125 15:01:51.647723 10185 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-2366 Nov 25 15:01:51.647: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-2366: <*errors.errorString | 0xc002770f90>: { s: "1 containers failed which is more than allowed 0", } Nov 25 15:01:51.647: FAIL: failed to create replication controller with service in the namespace: loadbalancers-2366: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75cdc0f?, {0x801de88, 0xc002b6e340}, 0xc000931680, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.10() test/e2e/network/loadbalancer.go:798 +0xf0 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:01:51.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 15:01:51.708: INFO: Output of kubectl describe svc: Nov 25 15:01:51.708: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2366 describe svc --namespace=loadbalancers-2366' Nov 25 15:01:52.411: INFO: stderr: "" Nov 25 15:01:52.411: INFO: stdout: "Name: affinity-lb\nNamespace: loadbalancers-2366\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.82.18\nIPs: 10.0.82.18\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 31138/TCP\nEndpoints: 10.64.0.56:9376,10.64.2.43:9376,10.64.3.44:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 15:01:52.411: INFO: Name: affinity-lb Namespace: loadbalancers-2366 Labels: <none> Annotations: <none> Selector: name=affinity-lb Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.82.18 IPs: 10.0.82.18 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 31138/TCP Endpoints: 10.64.0.56:9376,10.64.2.43:9376,10.64.3.44:9376 Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:01:52.411 STEP: Collecting events from namespace "loadbalancers-2366". 11/25/22 15:01:52.411 STEP: Found 17 events. 
11/25/22 15:01:52.492 Nov 25 15:01:52.492: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-ljvdn: { } Scheduled: Successfully assigned loadbalancers-2366/affinity-lb-ljvdn to bootstrap-e2e-minion-group-xfgk Nov 25 15:01:52.492: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-nhvsd: { } Scheduled: Successfully assigned loadbalancers-2366/affinity-lb-nhvsd to bootstrap-e2e-minion-group-nfrc Nov 25 15:01:52.492: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-sx85v: { } Scheduled: Successfully assigned loadbalancers-2366/affinity-lb-sx85v to bootstrap-e2e-minion-group-cs2j Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:29 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-ljvdn Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:29 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-sx85v Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:29 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-nhvsd Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:31 +0000 UTC - event for affinity-lb-sx85v: {kubelet bootstrap-e2e-minion-group-cs2j} Created: Created container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:31 +0000 UTC - event for affinity-lb-sx85v: {kubelet bootstrap-e2e-minion-group-cs2j} Started: Started container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:31 +0000 UTC - event for affinity-lb-sx85v: {kubelet bootstrap-e2e-minion-group-cs2j} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-ljvdn: {kubelet bootstrap-e2e-minion-group-xfgk} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-ljvdn: {kubelet bootstrap-e2e-minion-group-xfgk} Created: Created container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-ljvdn: {kubelet bootstrap-e2e-minion-group-xfgk} Started: Started container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-nhvsd: {kubelet bootstrap-e2e-minion-group-nfrc} Created: Created container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-nhvsd: {kubelet bootstrap-e2e-minion-group-nfrc} Started: Started container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:32 +0000 UTC - event for affinity-lb-nhvsd: {kubelet bootstrap-e2e-minion-group-nfrc} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:34 +0000 UTC - event for affinity-lb-ljvdn: {kubelet bootstrap-e2e-minion-group-xfgk} Killing: Stopping container affinity-lb Nov 25 15:01:52.492: INFO: At 2022-11-25 15:01:37 +0000 UTC - event for affinity-lb-ljvdn: {kubelet bootstrap-e2e-minion-group-xfgk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 15:01:52.573: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 15:01:52.573: INFO: affinity-lb-ljvdn bootstrap-e2e-minion-group-xfgk Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC }] Nov 25 15:01:52.573: INFO: affinity-lb-nhvsd bootstrap-e2e-minion-group-nfrc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC }] Nov 25 15:01:52.573: INFO: affinity-lb-sx85v bootstrap-e2e-minion-group-cs2j Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:01:29 +0000 UTC }] Nov 25 15:01:52.573: INFO: Nov 25 15:01:53.026: INFO: Logging node info for node bootstrap-e2e-master Nov 25 15:01:53.105: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 57fbafcc-fd48-4c2a-b8af-d2f45e071824 2855 0 2022-11-25 14:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 14:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 15:01:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:01:02 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.189.151,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a27504a9a8de9326ab25236db517b6d4,SystemUUID:a27504a9-a8de-9326-ab25-236db517b6d4,BootID:fd4b6e0f-8d3b-43d1-8d87-0b5f34de48b4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:01:53.105: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 15:01:53.211: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 15:01:53.392: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container etcd-container ready: true, restart count 0 Nov 25 15:01:53.392: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container kube-scheduler ready: true, restart count 4 Nov 25 15:01:53.392: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container kube-apiserver ready: true, restart count 1 Nov 25 15:01:53.392: INFO: 
kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 25 15:01:53.392: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 25 15:01:53.392: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 14:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container l7-lb-controller ready: false, restart count 4 Nov 25 15:01:53.392: INFO: metadata-proxy-v0.1-2v8cl started at 2022-11-25 14:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 15:01:53.392: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:01:53.392: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:01:53.392: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container etcd-container ready: true, restart count 0 Nov 25 15:01:53.392: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 14:54:48 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:53.392: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 15:01:53.711: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 15:01:53.711: INFO: Logging node info for node bootstrap-e2e-minion-group-cs2j Nov 25 15:01:53.781: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-cs2j 709b4477-dd95-4ae0-b576-f41790f3abc7 3429 0 2022-11-25 14:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-cs2j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-cs2j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4829":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-7246":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-9023":"bootstrap-e2e-minion-group-cs2j"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 15:00:32 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:01:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:01:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-cs2j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:00:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:37 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:01:46 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.154.188,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:009dcaae494ddb3388c5512015911a5e,SystemUUID:009dcaae-494d-db33-88c5-512015911a5e,BootID:0ab614df-9d04-456f-9e89-54d5c6a29e6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0,DevicePath:,},},Config:nil,},} Nov 25 15:01:53.782: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-cs2j Nov 25 15:01:53.848: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-cs2j Nov 25 15:01:54.054: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:58:32 +0000 UTC (0+7 container statuses recorded) Nov 25 15:01:54.054: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container hostpath ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 15:01:54.054: INFO: nfs-io-client started at 2022-11-25 14:59:28 +0000 UTC (1+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Init container nfs-io-init ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container nfs-io-client ready: false, restart count 0 Nov 25 15:01:54.054: INFO: kube-proxy-bootstrap-e2e-minion-group-cs2j started at 2022-11-25 14:55:30 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container kube-proxy ready: true, restart count 3 Nov 25 15:01:54.054: INFO: pod-subpath-test-preprovisionedpv-phnq started at 2022-11-25 15:01:45 +0000 UTC (1+1 container statuses recorded) Nov 25 15:01:54.054: INFO: 
Init container init-volume-preprovisionedpv-phnq ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container test-container-subpath-preprovisionedpv-phnq ready: false, restart count 0 Nov 25 15:01:54.054: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-kzgc5 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:01:54.054: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-w5p2t started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:01:54.054: INFO: affinity-lb-sx85v started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container affinity-lb ready: true, restart count 0 Nov 25 15:01:54.054: INFO: metadata-proxy-v0.1-jj4l2 started at 2022-11-25 14:55:31 +0000 UTC (0+2 container statuses recorded) Nov 25 15:01:54.054: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:01:54.054: INFO: konnectivity-agent-zd86w started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container konnectivity-agent ready: true, restart count 3 Nov 25 15:01:54.054: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-grzvg started at 2022-11-25 15:01:41 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:01:54.054: INFO: pod-d4e49fe6-cb19-4441-805a-ab6bcf78fefc started at 2022-11-25 14:59:51 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:01:54.054: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-n2wrg started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:01:54.054: INFO: hostexec-bootstrap-e2e-minion-group-cs2j-8pmc5 started at 2022-11-25 14:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:01:54.054: INFO: pod-ddee8992-7f2b-418d-a1ff-6286a761b8e6 started at 2022-11-25 14:59:39 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:01:54.054: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:01:31 +0000 UTC (0+7 container statuses recorded) Nov 25 15:01:54.054: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container hostpath ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 15:01:54.054: INFO: l7-default-backend-8549d69d99-9c99n started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 15:01:54.054: INFO: coredns-6d97d5ddb-62vqw started at 2022-11-25 14:55:49 +0000 UTC (0+1 container statuses recorded) Nov 25 
15:01:54.054: INFO: Container coredns ready: false, restart count 4 Nov 25 15:01:54.054: INFO: csi-hostpathplugin-0 started at 2022-11-25 14:59:28 +0000 UTC (0+7 container statuses recorded) Nov 25 15:01:54.054: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container hostpath ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 15:01:54.054: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 15:01:54.054: INFO: pod-subpath-test-preprovisionedpv-8bdm started at 2022-11-25 15:01:45 +0000 UTC (1+2 container statuses recorded) Nov 25 15:01:54.054: INFO: Init container init-volume-preprovisionedpv-8bdm ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container test-container-subpath-preprovisionedpv-8bdm ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container test-container-volume-preprovisionedpv-8bdm ready: true, restart count 0 Nov 25 15:01:54.054: INFO: reallocate-nodeport-test-mkwml started at 2022-11-25 14:58:49 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container netexec ready: true, restart count 3 Nov 25 15:01:54.054: INFO: netserver-0 started at 2022-11-25 15:01:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container webserver ready: false, restart count 0 Nov 25 15:01:54.054: INFO: pod-subpath-test-dynamicpv-z4lq started at 2022-11-25 15:01:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Init container init-volume-dynamicpv-z4lq ready: true, restart count 0 Nov 25 15:01:54.054: INFO: Container test-container-subpath-dynamicpv-z4lq ready: false, restart count 0 Nov 25 15:01:54.054: INFO: coredns-6d97d5ddb-gzrc5 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container coredns ready: false, restart count 4 Nov 25 15:01:54.054: INFO: kube-dns-autoscaler-5f6455f985-q4zhz started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container autoscaler ready: false, restart count 4 Nov 25 15:01:54.054: INFO: volume-snapshot-controller-0 started at 2022-11-25 14:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:01:54.054: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 15:01:54.129: INFO: Logging node info for node bootstrap-e2e-minion-group-nfrc Nov 25 15:01:54.169: INFO: Error getting node info Get "https://34.82.189.151/api/v1/nodes/bootstrap-e2e-minion-group-nfrc": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:54.169: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:01:54.170: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-nfrc Nov 25 15:01:54.212: INFO: Unexpected error retrieving node events Get "https://34.82.189.151/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.name%3Dbootstrap-e2e-minion-group-nfrc%2CinvolvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:54.212: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-nfrc Nov 25 15:01:54.251: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-nfrc: Get "https://34.82.189.151/api/v1/nodes/bootstrap-e2e-minion-group-nfrc:10250/proxy/pods": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:54.251: INFO: Logging node info for node bootstrap-e2e-minion-group-xfgk Nov 25 15:01:54.290: INFO: Error getting node info Get "https://34.82.189.151/api/v1/nodes/bootstrap-e2e-minion-group-xfgk": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:54.290: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:01:54.290: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-xfgk Nov 25 15:01:54.330: INFO: Unexpected error retrieving node events Get "https://34.82.189.151/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-xfgk": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:54.330: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk Nov 25 15:01:54.369: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-xfgk: Get "https://34.82.189.151/api/v1/nodes/bootstrap-e2e-minion-group-xfgk:10250/proxy/pods": dial tcp 34.82.189.151:443: connect: connection refused [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-2366" for this suite. 
11/25/22 15:01:54.369 Nov 25 15:01:54.409: FAIL: Couldn't delete ns: "loadbalancers-2366": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-2366": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-2366", Err:(*net.OpError)(0xc002de27d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013984b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0012c7e20?, 0xc000d6bfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0012c7e20?, 0x0?}, {0xae73300?, 0x5?, 0xc003501980?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc002ff8340}, 0xc004734500, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.8() test/e2e/network/loadbalancer.go:776 +0xf0
from junit_01.xml
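For orientation before the full log below: this spec exercises ClientIP session affinity on a LoadBalancer Service with externalTrafficPolicy: Local (the "ESIPP" in the test name). The Go sketch that follows only illustrates that Service shape; the object name, namespace, selector, and port numbers are assumptions for illustration (the 80/9376 ports happen to match the kubectl describe output later in the log), and it is not the code the e2e framework actually runs.

// Sketch: build a LoadBalancer Service with ClientIP affinity and
// externalTrafficPolicy=Local, then print it as YAML. Names are hypothetical.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := &v1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb-esipp-example", Namespace: "default"}, // hypothetical
		Spec: v1.ServiceSpec{
			Type:     v1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"name": "affinity-lb-esipp-example"}, // hypothetical selector
			Ports: []v1.ServicePort{{
				Port:       80,                   // LB front-end port
				TargetPort: intstr.FromInt(9376), // backend pod port
			}},
			// ClientIP affinity: the service should keep routing a given client
			// to the same backend pod.
			SessionAffinity: v1.ServiceAffinityClientIP,
			// externalTrafficPolicy=Local ("ESIPP"): traffic is only delivered to
			// pods on the node that received it, preserving the client source IP.
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
		},
	}

	out, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}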
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:13:30.438 Nov 25 15:13:30.438: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:13:30.44 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:13:30.669 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:13:30.763 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:769 STEP: creating service in namespace loadbalancers-6287 11/25/22 15:13:30.96 STEP: creating service affinity-lb-esipp in namespace loadbalancers-6287 11/25/22 15:13:30.96 STEP: creating replication controller affinity-lb-esipp in namespace loadbalancers-6287 11/25/22 15:13:31.215 I1125 15:13:31.311969 10238 runners.go:193] Created replication controller with name: affinity-lb-esipp, namespace: loadbalancers-6287, replica count: 3 I1125 15:13:34.412951 10238 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:13:37.413766 10238 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1125 15:13:40.414395 10238 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 15:13:40.414414 10238 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-cs2j I1125 15:13:40.523133 10238 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-cs2j 709b4477-dd95-4ae0-b576-f41790f3abc7 8620 0 2022-11-25 14:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-cs2j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-cs2j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4829":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-7246":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-9023":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-9182":"bootstrap-e2e-minion-group-cs2j","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 15:10:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:12:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:13:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-cs2j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:37 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.154.188,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:009dcaae494ddb3388c5512015911a5e,SystemUUID:009dcaae-494d-db33-88c5-512015911a5e,BootID:0ab614df-9d04-456f-9e89-54d5c6a29e6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0,DevicePath:,},},Config:nil,},} I1125 15:13:40.523597 10238 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-cs2j I1125 15:13:40.594789 10238 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-cs2j I1125 15:13:40.744634 10238 runners.go:193] Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-cs2j: error trying to reach service: No agent available I1125 15:13:40.831471 10238 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-6287 Nov 25 15:13:40.831: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-6287: <*errors.errorString | 0xc0047f32f0>: { s: "1 containers failed which is more than allowed 0", } Nov 25 15:13:40.831: FAIL: failed to create replication controller with service in the namespace: loadbalancers-6287: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc002ff8340}, 0xc004734500, 0x0) test/e2e/network/service.go:3978 +0x1b1 
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.8() test/e2e/network/loadbalancer.go:776 +0xf0 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 15:13:40.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 15:13:40.968: INFO: Output of kubectl describe svc: Nov 25 15:13:40.968: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6287 describe svc --namespace=loadbalancers-6287' Nov 25 15:13:41.536: INFO: stderr: "" Nov 25 15:13:41.536: INFO: stdout: "Name: affinity-lb-esipp\nNamespace: loadbalancers-6287\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-esipp\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.226.235\nIPs: 10.0.226.235\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 32141/TCP\nEndpoints: 10.64.0.157:9376,10.64.2.157:9376,10.64.3.170:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Local\nHealthCheck NodePort: 30035\nEvents: <none>\n" Nov 25 15:13:41.536: INFO: Name: affinity-lb-esipp Namespace: loadbalancers-6287 Labels: <none> Annotations: <none> Selector: name=affinity-lb-esipp Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.226.235 IPs: 10.0.226.235 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 32141/TCP Endpoints: 10.64.0.157:9376,10.64.2.157:9376,10.64.3.170:9376 Session Affinity: ClientIP External Traffic Policy: Local HealthCheck NodePort: 30035 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:13:41.536 STEP: Collecting events from namespace "loadbalancers-6287". 11/25/22 15:13:41.536 STEP: Found 19 events. 
11/25/22 15:13:41.601 Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-tn24v Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-4tsdd Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-5mvmm Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp-4tsdd: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6287/affinity-lb-esipp-4tsdd to bootstrap-e2e-minion-group-xfgk Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp-5mvmm: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6287/affinity-lb-esipp-5mvmm to bootstrap-e2e-minion-group-nfrc Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:31 +0000 UTC - event for affinity-lb-esipp-tn24v: {default-scheduler } Scheduled: Successfully assigned loadbalancers-6287/affinity-lb-esipp-tn24v to bootstrap-e2e-minion-group-cs2j Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:32 +0000 UTC - event for affinity-lb-esipp-tn24v: {kubelet bootstrap-e2e-minion-group-cs2j} Created: Created container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:32 +0000 UTC - event for affinity-lb-esipp-tn24v: {kubelet bootstrap-e2e-minion-group-cs2j} Started: Started container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:32 +0000 UTC - event for affinity-lb-esipp-tn24v: {kubelet bootstrap-e2e-minion-group-cs2j} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-4tsdd: {kubelet bootstrap-e2e-minion-group-xfgk} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-4tsdd: {kubelet bootstrap-e2e-minion-group-xfgk} Created: Created container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-4tsdd: {kubelet bootstrap-e2e-minion-group-xfgk} Started: Started container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-5mvmm: {kubelet bootstrap-e2e-minion-group-nfrc} Created: Created container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-5mvmm: {kubelet bootstrap-e2e-minion-group-nfrc} Started: Started container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-5mvmm: {kubelet bootstrap-e2e-minion-group-nfrc} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:33 +0000 UTC - event for affinity-lb-esipp-tn24v: {kubelet bootstrap-e2e-minion-group-cs2j} Killing: Stopping container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:34 +0000 UTC - event for affinity-lb-esipp-4tsdd: {kubelet bootstrap-e2e-minion-group-xfgk} Killing: Stopping container affinity-lb-esipp Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:36 +0000 UTC - event for affinity-lb-esipp-tn24v: {kubelet bootstrap-e2e-minion-group-cs2j} 
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 15:13:41.601: INFO: At 2022-11-25 15:13:38 +0000 UTC - event for affinity-lb-esipp-4tsdd: {kubelet bootstrap-e2e-minion-group-xfgk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 15:13:41.668: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 15:13:41.668: INFO: affinity-lb-esipp-4tsdd bootstrap-e2e-minion-group-xfgk Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC }] Nov 25 15:13:41.668: INFO: affinity-lb-esipp-5mvmm bootstrap-e2e-minion-group-nfrc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC }] Nov 25 15:13:41.668: INFO: affinity-lb-esipp-tn24v bootstrap-e2e-minion-group-cs2j Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:13:31 +0000 UTC }] Nov 25 15:13:41.668: INFO: Nov 25 15:13:41.983: INFO: Unable to fetch loadbalancers-6287/affinity-lb-esipp-4tsdd/affinity-lb-esipp logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-4tsdd) Nov 25 15:13:42.106: INFO: Unable to fetch loadbalancers-6287/affinity-lb-esipp-5mvmm/affinity-lb-esipp logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-5mvmm) Nov 25 15:13:42.208: INFO: Unable to fetch loadbalancers-6287/affinity-lb-esipp-tn24v/affinity-lb-esipp logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-tn24v) Nov 25 15:13:42.280: INFO: Logging node info for node bootstrap-e2e-master Nov 25 15:13:42.407: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 57fbafcc-fd48-4c2a-b8af-d2f45e071824 6393 0 2022-11-25 14:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 14:55:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 14:55:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 15:11:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:11:09 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:11:09 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:11:09 +0000 UTC,LastTransitionTime:2022-11-25 14:55:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:11:09 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.189.151,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a27504a9a8de9326ab25236db517b6d4,SystemUUID:a27504a9-a8de-9326-ab25-236db517b6d4,BootID:fd4b6e0f-8d3b-43d1-8d87-0b5f34de48b4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:13:42.408: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 15:13:42.522: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 15:13:42.636: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 15:13:42.636: INFO: Logging node info for node bootstrap-e2e-minion-group-cs2j Nov 25 15:13:42.854: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-cs2j 709b4477-dd95-4ae0-b576-f41790f3abc7 8620 0 2022-11-25 14:55:30 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-cs2j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-cs2j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4829":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-7246":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-9023":"bootstrap-e2e-minion-group-cs2j","csi-hostpath-provisioning-9182":"bootstrap-e2e-minion-group-cs2j","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 15:10:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:12:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:13:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-cs2j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:10:33 +0000 UTC,LastTransitionTime:2022-11-25 14:55:31 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:37 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 
UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:13:32 +0000 UTC,LastTransitionTime:2022-11-25 14:55:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.154.188,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-cs2j.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:009dcaae494ddb3388c5512015911a5e,SystemUUID:009dcaae-494d-db33-88c5-512015911a5e,BootID:0ab614df-9d04-456f-9e89-54d5c6a29e6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9023^0ea6cf48-6cd2-11ed-b9eb-96728155b2c0,DevicePath:,},},Config:nil,},} Nov 25 15:13:42.854: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-cs2j Nov 25 15:13:42.980: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-cs2j Nov 25 15:13:43.166: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-cs2j: error trying to reach service: No agent available Nov 25 15:13:43.166: INFO: Logging node info for node bootstrap-e2e-minion-group-nfrc Nov 25 15:13:43.282: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-nfrc 32e3ddf0-9230-4008-a6d2-35385dd6942e 8753 0 2022-11-25 14:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-nfrc kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-nfrc topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4049":"bootstrap-e2e-minion-group-nfrc","csi-hostpath-multivolume-7132":"bootstrap-e2e-minion-group-nfrc","csi-hostpath-volumeio-5289":"bootstrap-e2e-minion-group-nfrc","csi-mock-csi-mock-volumes-6515":"csi-mock-csi-mock-volumes-6515"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:06:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 15:10:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 15:13:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-nfrc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 
UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.169.41,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-nfrc.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:584471f9c540880f2541598af76fd197,SystemUUID:584471f9-c540-880f-2541-598af76fd197,BootID:925b3820-ba2a-4f24-949e-2611ee406076,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8209^ad4cfbc5-6cd1-11ed-9cc2-ea835e3ab61a kubernetes.io/csi/csi-hostpath-multivolume-8209^ae9f3a1c-6cd1-11ed-9cc2-ea835e3ab61a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8209^ad4cfbc5-6cd1-11ed-9cc2-ea835e3ab61a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8209^ae9f3a1c-6cd1-11ed-9cc2-ea835e3ab61a,DevicePath:,},},Config:nil,},} Nov 25 15:13:43.282: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-nfrc Nov 25 15:13:43.450: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-nfrc Nov 25 15:13:43.643: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-nfrc: error trying to reach service: No agent available Nov 25 15:13:43.643: INFO: Logging node info for node bootstrap-e2e-minion-group-xfgk Nov 25 15:13:43.755: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xfgk ba54c0d2-29af-426e-a049-7278d60a9490 8743 0 2022-11-25 14:55:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xfgk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-xfgk topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5560":"bootstrap-e2e-minion-group-xfgk","csi-hostpath-provisioning-1135":"bootstrap-e2e-minion-group-xfgk","csi-mock-csi-mock-volumes-2741":"bootstrap-e2e-minion-group-xfgk","csi-mock-csi-mock-volumes-325":"bootstrap-e2e-minion-group-xfgk","csi-mock-csi-mock-volumes-9804":"csi-mock-csi-mock-volumes-9804"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2022-11-25 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:06:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 15:10:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 15:13:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-10/us-west1-b/bootstrap-e2e-minion-group-xfgk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:10:41 +0000 UTC,LastTransitionTime:2022-11-25 14:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 14:55:50 +0000 UTC,LastTransitionTime:2022-11-25 14:55:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:13:42 +0000 UTC,LastTransitionTime:2022-11-25 14:55:34 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.196.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xfgk.c.k8s-boskos-gce-project-10.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:35564864f08206045e292b7e32d4bbba,SystemUUID:35564864-f082-0604-5e29-2b7e32d4bbba,BootID:303b460c-3762-4624-8d44-d7a3124b5e6c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-5560^a7b41a64-6cd1-11ed-90f7-ee2d44c6e29f,DevicePath:,},},Config:nil,},} Nov 25 15:13:43.756: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-xfgk Nov 25 15:13:43.837: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-xfgk Nov 25 15:13:43.948: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-xfgk: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-6287" for this suite. 11/25/22 15:13:43.948
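Aside: the node dumps above are serialized Node structs whose useful signal is the NodeCondition blocks (Ready, MemoryPressure, DiskPressure, PIDPressure and the node-problem-detector conditions). The following is an illustrative client-go sketch only, not the e2e framework's code; it assumes a reachable apiserver and borrows the kubeconfig path that appears in the log, and it merely condenses the same Ready-condition data into one line per node.

```go
// Illustrative sketch only, not the e2e framework's code. Assumes a reachable
// apiserver and reuses the kubeconfig path shown in the log output.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List all nodes and print only the Ready condition, a condensed view of
	// the NodeCondition blocks dumped above.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err) // with the apiserver down this fails just like the tests above
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s since=%s\n",
					n.Name, c.Status, c.Reason, c.LastTransitionTime)
			}
		}
	}
}
```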
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sonly\sallow\saccess\sfrom\sservice\sloadbalancer\ssource\sranges\s\[Slow\]$'
test/e2e/framework/pod/resource.go:471 k8s.io/kubernetes/test/e2e/framework/pod.CreateExecPodOrFail({0x801de88, 0xc002baa820}, {0xc0032d8030, 0x12}, {0x75db904, 0xe}, 0x0) test/e2e/framework/pod/resource.go:471 +0x31c k8s.io/kubernetes/test/e2e/network.glob..func19.5() test/e2e/network/loadbalancer.go:500 +0x176 There were additional failures detected after the initial failure: [FAILED] Nov 25 14:59:57.009: failed to list events in namespace "loadbalancers-1881": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 14:59:57.050: Couldn't delete ns: "loadbalancers-1881": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-1881", Err:(*net.OpError)(0xc002858ff0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 14:59:49.42 Nov 25 14:59:49.420: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 14:59:49.422 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 14:59:49.747 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 14:59:49.914 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should only allow access from service loadbalancer source ranges [Slow] test/e2e/network/loadbalancer.go:487 STEP: Prepare allow source ips 11/25/22 14:59:50.445 Nov 25 14:59:50.445: INFO: Creating new exec pod Nov 25 14:59:50.583: INFO: Waiting up to 5m0s for pod "execpod-acceptfgmkh" in namespace "loadbalancers-1881" to be "running" Nov 25 14:59:50.658: INFO: Pod "execpod-acceptfgmkh": Phase="Pending", Reason="", readiness=false. Elapsed: 75.243234ms Nov 25 14:59:52.721: INFO: Pod "execpod-acceptfgmkh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13801548s Nov 25 14:59:54.727: INFO: Pod "execpod-acceptfgmkh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144251201s Nov 25 14:59:56.699: INFO: Encountered non-retryable error while getting pod loadbalancers-1881/execpod-acceptfgmkh: Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/pods/execpod-acceptfgmkh": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 14:59:56.699: INFO: Unexpected error occurred: error while waiting for pod loadbalancers-1881/execpod-acceptfgmkh to be running: Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/pods/execpod-acceptfgmkh": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 14:59:56.699: FAIL: failed to create new exec pod in namespace: loadbalancers-1881 Unexpected error: <*fmt.wrapError | 0xc003504080>: { msg: "error while waiting for pod loadbalancers-1881/execpod-acceptfgmkh to be running: Get \"https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/pods/execpod-acceptfgmkh\": dial tcp 34.82.189.151:443: connect: connection refused", err: <*url.Error | 0xc0032fb800>{ Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/pods/execpod-acceptfgmkh", Err: <*net.OpError | 0xc003258d20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0030ad320>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003504040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } error while waiting for pod loadbalancers-1881/execpod-acceptfgmkh to be running: Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/pods/execpod-acceptfgmkh": dial tcp 34.82.189.151:443: connect: connection refused occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.CreateExecPodOrFail({0x801de88, 0xc002baa820}, {0xc0032d8030, 0x12}, {0x75db904, 0xe}, 0x0) test/e2e/framework/pod/resource.go:471 +0x31c k8s.io/kubernetes/test/e2e/network.glob..func19.5() test/e2e/network/loadbalancer.go:500 +0x176 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 14:59:56.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 14:59:56.739: INFO: Output of kubectl describe svc: Nov 25 14:59:56.739: INFO: Running 
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.189.151 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-1881 describe svc --namespace=loadbalancers-1881' Nov 25 14:59:56.967: INFO: rc: 1 Nov 25 14:59:56.967: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 14:59:56.968 STEP: Collecting events from namespace "loadbalancers-1881". 11/25/22 14:59:56.968 Nov 25 14:59:57.009: INFO: Unexpected error: failed to list events in namespace "loadbalancers-1881": <*url.Error | 0xc002cd47b0>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/events", Err: <*net.OpError | 0xc002777bd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0030ad9e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001733de0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 14:59:57.009: FAIL: failed to list events in namespace "loadbalancers-1881": Get "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000a3e5c0, {0xc0032d8030, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002baa820}, {0xc0032d8030, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000a3e650?, {0xc0032d8030?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012a44b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0013705e0?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013705e0?, 0x7fadfa0?}, {0xae73300?, 0xc001bddf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-1881" for this suite. 11/25/22 14:59:57.01 Nov 25 14:59:57.050: FAIL: Couldn't delete ns: "loadbalancers-1881": Delete "https://34.82.189.151/api/v1/namespaces/loadbalancers-1881": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/loadbalancers-1881", Err:(*net.OpError)(0xc002858ff0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012a44b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0013704b0?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013704b0?, 0x0?}, {0xae73300?, 0x30000c001be1f90?, 0x3a212e4?}) /usr/local/go/src/reflect/value.go:368 +0xbc
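The nested error dumps above (a *url.Error wrapping a *net.OpError wrapping an *os.SyscallError with errno 0x6f, i.e. ECONNREFUSED on Linux) all describe the same condition. As a hedged illustration, not anything the suite does, such an error can be classified with errors.Is instead of matching the "connection refused" string; the endpoint is copied from the log, while the /healthz path and timeout are assumptions.

```go
// Illustrative sketch only: classify the "connection refused" failure shown
// above via errors.Is rather than string matching. Endpoint taken from the
// log; the /healthz path and the timeout are assumptions.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall" // on Linux, ECONNREFUSED is errno 0x6f (111), as dumped above
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("https://34.82.189.151/healthz")
	if err == nil {
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
	// url.Error, net.OpError and os.SyscallError all unwrap down to the errno.
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("connection refused, same failure mode as the tests above")
		return
	}
	fmt.Println("unreachable for a different reason:", err)
}
```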
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\shttp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0033d4000, {0x75c6f7c, 0x9}, 0xc002e7af60) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0033d4000, 0x7fa67c264858?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0033d4000, 0x3e?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012965a0, {0xc003e9af20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.13() test/e2e/network/networking.go:364 +0x51 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:01:55.633: failed to list events in namespace "nettest-4225": Get "https://34.82.189.151/api/v1/namespaces/nettest-4225/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:01:55.674: Couldn't delete ns: "nettest-4225": Delete "https://34.82.189.151/api/v1/namespaces/nettest-4225": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/nettest-4225", Err:(*net.OpError)(0xc0033a38b0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:00:04.046 Nov 25 15:00:04.046: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/25/22 15:00:04.048 Nov 25 15:00:04.088: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:06.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:08.128: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:10.128: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:12.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:14.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:16.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:18.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:20.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:22.129: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:24.128: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:00:26.128: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:01:28.575 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:01:28.666 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should update nodePort: http [Slow] test/e2e/network/networking.go:363 STEP: Performing setup for networking test in namespace nettest-4225 11/25/22 15:01:28.765 STEP: creating a selector 11/25/22 15:01:28.765 STEP: Creating the service pods in kubernetes 11/25/22 15:01:28.765 Nov 25 15:01:28.766: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 15:01:29.397: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-4225" to be "running and ready" Nov 25 15:01:29.513: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 115.59845ms Nov 25 15:01:29.513: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:01:31.573: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.176478711s Nov 25 15:01:31.573: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:01:33.591: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193860338s Nov 25 15:01:33.591: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:01:35.638: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24145724s Nov 25 15:01:35.638: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:01:37.701: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.303989862s Nov 25 15:01:37.701: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:39.592: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.194557013s Nov 25 15:01:39.592: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:41.600: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.202891929s Nov 25 15:01:41.600: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:43.605: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.20839816s Nov 25 15:01:43.605: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:45.625: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.228146518s Nov 25 15:01:45.625: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:47.611: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.213782311s Nov 25 15:01:47.611: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:49.586: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.18886874s Nov 25 15:01:49.586: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:51.638: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.240686742s Nov 25 15:01:51.638: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:53.579: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.181879579s Nov 25 15:01:53.579: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:01:55.553: INFO: Encountered non-retryable error while getting pod nettest-4225/netserver-0: Get "https://34.82.189.151/api/v1/namespaces/nettest-4225/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused Nov 25 15:01:55.554: INFO: Unexpected error: <*fmt.wrapError | 0xc001647b00>: { msg: "error while waiting for pod nettest-4225/netserver-0 to be running and ready: Get \"https://34.82.189.151/api/v1/namespaces/nettest-4225/pods/netserver-0\": dial tcp 34.82.189.151:443: connect: connection refused", err: <*url.Error | 0xc000bec750>{ Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/nettest-4225/pods/netserver-0", Err: <*net.OpError | 0xc00346ca00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0019463f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001647ac0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 15:01:55.554: FAIL: error while waiting for pod nettest-4225/netserver-0 to be running and ready: Get "https://34.82.189.151/api/v1/namespaces/nettest-4225/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0033d4000, {0x75c6f7c, 0x9}, 0xc002e7af60) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0033d4000, 0x7fa67c264858?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0033d4000, 0x3e?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012965a0, {0xc003e9af20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.13() test/e2e/network/networking.go:364 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 25 15:01:55.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:01:55.594 STEP: Collecting events from namespace "nettest-4225". 
11/25/22 15:01:55.594 Nov 25 15:01:55.633: INFO: Unexpected error: failed to list events in namespace "nettest-4225": <*url.Error | 0xc0010c68a0>: { Op: "Get", URL: "https://34.82.189.151/api/v1/namespaces/nettest-4225/events", Err: <*net.OpError | 0xc00346cbe0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0019469f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001647e80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:01:55.633: FAIL: failed to list events in namespace "nettest-4225": Get "https://34.82.189.151/api/v1/namespaces/nettest-4225/events": dial tcp 34.82.189.151:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000ec25c0, {0xc0048cfa90, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002501a00}, {0xc0048cfa90, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000ec2650?, {0xc0048cfa90?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012965a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001587740?, 0xc003d3efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002501dc8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001587740?, 0x29449fc?}, {0xae73300?, 0xc003d3ef80?, 0x2d5dcbd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "nettest-4225" for this suite. 11/25/22 15:01:55.634 Nov 25 15:01:55.674: FAIL: Couldn't delete ns: "nettest-4225": Delete "https://34.82.189.151/api/v1/namespaces/nettest-4225": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/nettest-4225", Err:(*net.OpError)(0xc0033a38b0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012965a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001587320?, 0xc002004fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001587320?, 0x0?}, {0xae73300?, 0x5?, 0xc002d65b90?}) /usr/local/go/src/reflect/value.go:368 +0xbc
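The long "waiting for it to be Running (with Ready = true)" sequence above is a plain poll of the pod phase and the PodReady condition. Below is a rough sketch of that pattern under stated assumptions, not framework/pod's actual helper: names, the 2s/5m timings, and the choice to abort on a Get error (what the log calls a "non-retryable error") are all illustrative.

```go
// Illustrative sketch only, not the e2e framework's helper. Mirrors the
// poll-until-"Running (with Ready = true)" loop seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the wait, roughly matching what the
			// suite reports as a non-retryable error once the apiserver is gone.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			fmt.Printf("pod %s/%s is %s, waiting\n", ns, name, pod.Status.Phase)
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Clientset construction is omitted here; see the node-listing sketch earlier.
	_ = waitForPodRunningAndReady
	fmt.Println("sketch only; wire up a clientset to use waitForPodRunningAndReady")
}
```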
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\sudp\s\[Slow\]$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0001fc0e0, {0x75c6f7c, 0x9}, 0xc003911a40) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0001fc0e0, 0x7f632403d798?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0001fc0e0, 0x3e?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012b22d0, {0xc003dcbf20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.15() test/e2e/network/networking.go:395 +0x51 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:07:56.064: failed to list events in namespace "nettest-6338": Get "https://34.82.189.151/api/v1/namespaces/nettest-6338/events": dial tcp 34.82.189.151:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:07:56.104: Couldn't delete ns: "nettest-6338": Delete "https://34.82.189.151/api/v1/namespaces/nettest-6338": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/nettest-6338", Err:(*net.OpError)(0xc0030b3590)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] Networking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 15:07:06.935
Nov 25 15:07:06.935: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest 11/25/22 15:07:06.937
STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:07:07.203
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:07:07.311
[BeforeEach] [sig-network] Networking
  test/e2e/framework/metrics/init/init.go:31
[It] should update nodePort: udp [Slow]
  test/e2e/network/networking.go:394
STEP: Performing setup for networking test in namespace nettest-6338 11/25/22 15:07:07.462
STEP: creating a selector 11/25/22 15:07:07.462
STEP: Creating the service pods in kubernetes 11/25/22 15:07:07.462
Nov 25 15:07:07.462: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 25 15:07:07.870: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-6338" to be "running and ready"
Nov 25 15:07:07.942: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 72.097686ms
Nov 25 15:07:07.942: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:10.022: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151483169s
Nov 25 15:07:10.022: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:11.991: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121143425s
Nov 25 15:07:11.991: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:14.008: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138024631s
Nov 25 15:07:14.008: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:16.014: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143460481s
Nov 25 15:07:16.014: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:18.007: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137156122s
Nov 25 15:07:18.007: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:20.058: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.188035959s
Nov 25 15:07:20.058: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:21.994: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123916293s
Nov 25 15:07:21.994: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:24.032: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.162121511s
Nov 25 15:07:24.032: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:26.038: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.167807093s
Nov 25 15:07:26.038: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:28.025: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.155247731s
Nov 25 15:07:28.025: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:30.042: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.172384919s
Nov 25 15:07:30.042: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:32.005: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.135200829s
Nov 25 15:07:32.005: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:34.016: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.145793792s
Nov 25 15:07:34.016: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:36.007: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.137325474s
Nov 25 15:07:36.007: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:38.090: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.220237581s
Nov 25 15:07:38.090: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:40.011: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.14143289s
Nov 25 15:07:40.012: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:42.004: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.134376425s
Nov 25 15:07:42.004: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:44.014: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.143793541s
Nov 25 15:07:44.014: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:46.008: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.13824312s
Nov 25 15:07:46.008: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:48.020: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 40.150160558s
Nov 25 15:07:48.020: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:50.002: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.132300144s
Nov 25 15:07:50.002: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:51.999: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 44.128668362s
Nov 25 15:07:51.999: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:54.004: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.133736587s
Nov 25 15:07:54.004: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 25 15:07:55.983: INFO: Encountered non-retryable error while getting pod nettest-6338/netserver-0: Get "https://34.82.189.151/api/v1/namespaces/nettest-6338/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:07:55.983: INFO: Unexpected error:
<*fmt.wrapError | 0xc00300d680>: {
    msg: "error while waiting for pod nettest-6338/netserver-0 to be running and ready: Get \"https://34.82.189.151/api/v1/namespaces/nettest-6338/pods/netserver-0\": dial tcp 34.82.189.151:443: connect: connection refused",
    err: <*url.Error | 0xc004f71200>{
        Op: "Get",
        URL: "https://34.82.189.151/api/v1/namespaces/nettest-6338/pods/netserver-0",
        Err: <*net.OpError | 0xc0030132c0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc0030d7bf0>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc00300d640>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    },
}
Nov 25 15:07:55.983: FAIL: error while waiting for pod nettest-6338/netserver-0 to be running and ready: Get "https://34.82.189.151/api/v1/namespaces/nettest-6338/pods/netserver-0": dial tcp 34.82.189.151:443: connect: connection refused

Full Stack Trace
  k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0001fc0e0, {0x75c6f7c, 0x9}, 0xc003911a40)
    test/e2e/framework/network/utils.go:866 +0x1d0
  k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0001fc0e0, 0x7f632403d798?)
    test/e2e/framework/network/utils.go:763 +0x55
  k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0001fc0e0, 0x3e?)
    test/e2e/framework/network/utils.go:778 +0x3e
  k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012b22d0, {0xc003dcbf20, 0x1, 0x0?})
    test/e2e/framework/network/utils.go:131 +0x125
  k8s.io/kubernetes/test/e2e/network.glob..func22.6.15()
    test/e2e/network/networking.go:395 +0x51
[AfterEach] [sig-network] Networking
  test/e2e/framework/node/init/init.go:32
Nov 25 15:07:55.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-network] Networking
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] Networking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 15:07:56.023
STEP: Collecting events from namespace "nettest-6338". 11/25/22 15:07:56.023
Nov 25 15:07:56.064: INFO: Unexpected error: failed to list events in namespace "nettest-6338":
<*url.Error | 0xc001ddbc80>: {
    Op: "Get",
    URL: "https://34.82.189.151/api/v1/namespaces/nettest-6338/events",
    Err: <*net.OpError | 0xc0022e9a40>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc003108150>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 82, 189, 151],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc0000db2a0>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 25 15:07:56.064: FAIL: failed to list events in namespace "nettest-6338": Get "https://34.82.189.151/api/v1/namespaces/nettest-6338/events": dial tcp 34.82.189.151:443: connect: connection refused

Full Stack Trace
  k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0023a85c0, {0xc0010f07c0, 0xc})
    test/e2e/framework/debug/dump.go:44 +0x191
  k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002724ea0}, {0xc0010f07c0, 0xc})
    test/e2e/framework/debug/dump.go:62 +0x8d
  k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0023a8650?, {0xc0010f07c0?, 0x7fa7740?})
    test/e2e/framework/debug/init/init.go:34 +0x32
  k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
  k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012b22d0)
    test/e2e/framework/framework.go:271 +0x179
  reflect.Value.call({0x6627cc0?, 0xc003a96630?, 0xc00050dfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00222b8e8?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
  reflect.Value.Call({0x6627cc0?, 0xc003a96630?, 0x29449fc?}, {0xae73300?, 0xc00050df80?, 0x0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-network] Networking
  tear down framework | framework.go:193
STEP: Destroying namespace "nettest-6338" for this suite. 11/25/22 15:07:56.064
Nov 25 15:07:56.104: FAIL: Couldn't delete ns: "nettest-6338": Delete "https://34.82.189.151/api/v1/namespaces/nettest-6338": dial tcp 34.82.189.151:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.82.189.151/api/v1/namespaces/nettest-6338", Err:(*net.OpError)(0xc0030b3590)})

Full Stack Trace
  k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
  k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012b22d0)
    test/e2e/framework/framework.go:383 +0x1ca
  reflect.Value.call({0x6627cc0?, 0xc003a965b0?, 0x60d31f1?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x268c80e?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
  reflect.Value.Call({0x6627cc0?, 0xc003a965b0?, 0x0?}, {0xae73300?, 0xc0012b22d0?, 0x6627cc0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
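Note on the repeated "Pending, waiting for it to be Running (with Ready = true)" lines above: they come from the framework polling the pod roughly every two seconds until a timeout. The sketch below is only an illustration of that shape, not the framework's actual helper; the function name waitForPodRunningAndReady, the clientset variable, and the 2s/5m interval and timeout are assumptions made for the sketch.

// Hypothetical sketch of a pod-readiness poll loop, similar in shape to the
// "waiting for pod to be running and ready" messages above. Not the real
// e2e framework code; names and intervals are assumptions.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the poll; in the log above this is where
			// the "connection refused" from the apiserver surfaces and the
			// framework reports it as a non-retryable error.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			fmt.Printf("Pod %q phase=%s, still waiting\n", name, pod.Status.Phase)
			return false, nil
		}
		// Running is not enough; also require the Ready condition to be true.
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

In the run above, the loop never gets past Pending, and once the apiserver stops answering, the Get itself fails, which is why the log flips from Pending lines to a non-retryable error instead of timing out.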
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sGCE\s\[Slow\]\sshould\sbe\sable\sto\screate\sand\stear\sdown\sa\sstandard\-tier\sload\sbalancer\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e680f0)
    test/e2e/framework/framework.go:241 +0x96f

There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference

Full Stack Trace
  k8s.io/kubernetes/test/e2e/network.glob..func21.2()
    test/e2e/network/network_tiers.go:57 +0x133
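The [PANICKED] entry is a follow-on failure: the BeforeEach never got past namespace creation, so state the AfterEach expects was never initialized, and dereferencing it panics at network_tiers.go:57. Whatever that line actually touches, the defensive pattern in a Ginkgo AfterEach looks roughly like the sketch below; cs and svc are invented names, not the variables in the real test.

// Hypothetical guard in a Ginkgo AfterEach, assuming suite-level variables
// cs and svc that a failed setup may have left nil. Not the real
// network_tiers.go cleanup; names are assumptions for illustration.
package tiersketch

import (
	"context"

	"github.com/onsi/ginkgo/v2"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var (
	cs  kubernetes.Interface // set during setup; nil if namespace creation failed
	svc *v1.Service          // set in the spec body; nil if setup never ran
)

var _ = ginkgo.AfterEach(func() {
	// Guard against the follow-on panic seen above: when the apiserver was
	// unreachable, cs and svc were never populated, so skip cleanup instead
	// of dereferencing nil.
	if cs == nil || svc == nil {
		return
	}
	_ = cs.CoreV1().Services(svc.Namespace).Delete(context.TODO(), svc.Name, metav1.DeleteOptions{})
})

This is also why a single apiserver outage shows up twice for the same spec: once for the failed setup and once for the unguarded cleanup.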
[BeforeEach] [sig-network] Services GCE [Slow]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/25/22 15:14:05.991
Nov 25 15:14:05.991: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename services 11/25/22 15:14:05.993
Nov 25 15:14:06.032: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:08.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:10.072: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:12.072: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:14.072: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:16.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:18.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:20.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:22.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:24.072: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:26.073: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:28.072: INFO: Unexpected error while creating namespace: Post "https://34.82.189.151/api/v1/namespaces": dial tcp 34.82.189.151:443: connect: connection refused
Nov 25 15:14:30.072: INFO: Unexpected error