go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:111
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376

There were additional failures detected after the initial failure:
[FAILED] Nov 26 01:04:00.144: failed to list events in namespace "cronjob-7443": Get "https://34.168.44.214/api/v1/namespaces/cronjob-7443/events": dial tcp 34.168.44.214:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 01:04:00.183: Couldn't delete ns: "cronjob-7443": Delete "https://34.168.44.214/api/v1/namespaces/cronjob-7443": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/cronjob-7443", Err:(*net.OpError)(0xc002629400)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 01:00:57.982
Nov 26 01:00:57.982: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 01:00:57.983
Nov 26 01:00:58.023: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:00.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:02.062: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:04.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:06.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:08.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:10.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:12.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:14.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:16.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:18.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:20.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:22.062: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:24.062: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:01:26.063: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:02:07.726
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:02:07.817
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/apps/cronjob.go:96
STEP: Creating a suspended cronjob 11/26/22 01:02:07.933
STEP: Ensuring no jobs are scheduled 11/26/22 01:02:07.985
STEP: Ensuring no job exists by listing jobs explicitly 11/26/22 01:04:00.024
Nov 26 01:04:00.064: INFO: Unexpected error: Failed to list the CronJobs in namespace cronjob-7443: <*url.Error | 0xc002a1e210>: { Op: "Get", URL: "https://34.168.44.214/apis/batch/v1/namespaces/cronjob-7443/jobs", Err: <*net.OpError | 0xc0026291d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002a1e1e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011fcc80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Nov 26 01:04:00.064: FAIL: Failed to list the CronJobs in namespace cronjob-7443: Get "https://34.168.44.214/apis/batch/v1/namespaces/cronjob-7443/jobs": dial tcp 34.168.44.214:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.2()
	test/e2e/apps/cronjob.go:111 +0x376
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 01:04:00.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 01:04:00.104
STEP: Collecting events from namespace "cronjob-7443". 11/26/22 01:04:00.104
Nov 26 01:04:00.143: INFO: Unexpected error: failed to list events in namespace "cronjob-7443": <*url.Error | 0xc0020ae270>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/cronjob-7443/events", Err: <*net.OpError | 0xc00242f8b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0019a0cf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000ed8940>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Nov 26 01:04:00.144: FAIL: failed to list events in namespace "cronjob-7443": Get "https://34.168.44.214/api/v1/namespaces/cronjob-7443/events": dial tcp 34.168.44.214:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0010ea5c0, {0xc0038e2a00, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001883d40}, {0xc0038e2a00, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0010ea650?, {0xc0038e2a00?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000fff860)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc001960c40?, 0xc004473fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0018828a8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc001960c40?, 0x29449fc?}, {0xae73300?, 0xc004473f80?, 0x2fdb5c0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-7443" for this suite. 11/26/22 01:04:00.144
Nov 26 01:04:00.183: FAIL: Couldn't delete ns: "cronjob-7443": Delete "https://34.168.44.214/api/v1/namespaces/cronjob-7443": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/cronjob-7443", Err:(*net.OpError)(0xc002629400)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000fff860)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc001960b90?, 0xc0007edfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc001960b90?, 0x0?}, {0xae73300?, 0x5?, 0xc001126c00?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
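The FAIL above is raised while the test explicitly lists jobs in its namespace ("Ensuring no job exists by listing jobs explicitly") and the apiserver at 34.168.44.214:443 refuses the connection. For orientation only, here is a minimal client-go sketch of the same kind of List call; it is not the framework's actual code, and the kubeconfig path and namespace name are taken from the log purely as illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite logs (assumed path).
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Mirrors the failing request: GET /apis/batch/v1/namespaces/<ns>/jobs.
	// An unreachable apiserver surfaces here as
	// "dial tcp ...:443: connect: connection refused".
	jobs, err := client.BatchV1().Jobs("cronjob-7443").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("jobs found:", len(jobs.Items))
}

Either way, the test itself never gets to assert on job scheduling; the failure is in reaching the apiserver at all.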
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000fcd860)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
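This second failure happens earlier, in the framework's BeforeEach: namespace creation is retried through repeated connection refusals, and then the wait for the "default" ServiceAccount times out (see the log below). A rough client-go sketch of that style of wait follows; the poll interval, timeout, and clientset wiring are illustrative assumptions, not the framework's actual values or code:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount shows
// up in the namespace; this is the step that times out in the log below.
func waitForDefaultServiceAccount(client kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not provisioned yet, keep polling
		}
		if err != nil {
			return false, nil // e.g. connection refused: keep retrying until the timeout
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitForDefaultServiceAccount(client, "cronjob-9815"))
}

With the apiserver refusing connections for minutes at a time, any wait of this shape ends in "timed out waiting for the condition", which matches the FAIL below.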
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 01:18:29.536
Nov 26 01:18:29.536: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 01:18:29.538
Nov 26 01:18:29.577: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:18:31.617: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:18:33.618: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:18:35.617: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:18:37.617: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:18:39.617: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused
Nov 26 01:20:44.453: INFO: Unexpected error: <*fmt.wrapError | 0xc0015200a0>: { msg: "wait for service account \"default\" in namespace \"cronjob-9815\": timed out waiting for the condition", err: <*errors.errorString | 0xc000115d70>{ s: "timed out waiting for the condition", }, }
Nov 26 01:20:44.453: FAIL: wait for service account "default" in namespace "cronjob-9815": timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000fcd860)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 01:20:44.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 01:20:44.535
STEP: Collecting events from namespace "cronjob-9815". 11/26/22 01:20:44.535
STEP: Found 0 events.
11/26/22 01:20:44.588 Nov 26 01:20:44.646: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:20:44.646: INFO: Nov 26 01:20:44.693: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:20:44.733: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 12717 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:17:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:20:44.734: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:20:44.817: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:20:45.334: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container kube-apiserver ready: true, restart count 3 Nov 26 01:20:45.334: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 26 01:20:45.334: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 00:56:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container kube-addon-manager ready: true, restart count 2 Nov 26 01:20:45.334: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 00:56:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container l7-lb-controller ready: false, restart count 7 Nov 26 01:20:45.334: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container kube-scheduler ready: true, restart count 4 Nov 26 01:20:45.334: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container etcd-container ready: true, restart count 5 Nov 26 01:20:45.334: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container etcd-container ready: true, restart count 3 Nov 26 01:20:45.334: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:45.334: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 01:20:45.334: INFO: metadata-proxy-v0.1-8h6mf started at 2022-11-26 00:56:42 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:45.334: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:45.334: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:45.755: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 01:20:45.755: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:45.835: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 13108 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8152":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-provisioning-5652":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:20:45.835: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:45.937: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:46.036: INFO: pod-d647abcb-295b-4ba3-bb3b-72f4c6f3de02 started at 2022-11-26 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.036: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-bkkbv started at 2022-11-26 01:03:25 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: false, restart count 6 Nov 26 01:20:46.036: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:52 +0000 UTC (0+7 container statuses 
recorded) Nov 26 01:20:46.036: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:20:46.036: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 01:20:46.036: INFO: pod-configmaps-a8d056c0-ff53-45cb-8c13-ec73b1032b04 started at 2022-11-26 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:46.036: INFO: volume-snapshot-controller-0 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container volume-snapshot-controller ready: false, restart count 7 Nov 26 01:20:46.036: INFO: pod-subpath-test-dynamicpv-2vf4 started at 2022-11-26 01:00:19 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:46.036: INFO: Init container init-volume-dynamicpv-2vf4 ready: true, restart count 1 Nov 26 01:20:46.036: INFO: Container test-container-subpath-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:20:46.036: INFO: Container test-container-volume-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:20:46.036: INFO: metadata-proxy-v0.1-8d7ds started at 2022-11-26 00:56:40 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:46.036: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:46.036: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:46.036: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-kpcm8 started at 2022-11-26 00:59:55 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 01:20:46.036: INFO: netserver-0 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: false, restart count 6 Nov 26 01:20:46.036: INFO: pod-configmaps-cc7f33ac-2f26-44c6-ad1b-c8b91ecdfde7 started at 2022-11-26 01:02:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:46.036: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-ct8rx started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:20:46.036: INFO: netserver-0 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: false, restart count 3 Nov 26 01:20:46.036: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:15:34 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.036: INFO: Container csi-attacher ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-provisioner ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-resizer ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container hostpath ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container liveness-probe ready: false, restart count 2 Nov 26 01:20:46.036: INFO: Container node-driver-registrar ready: false, 
restart count 2 Nov 26 01:20:46.036: INFO: pod-subpath-test-inlinevolume-v5md started at 2022-11-26 01:00:23 +0000 UTC (1+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Init container init-volume-inlinevolume-v5md ready: true, restart count 0 Nov 26 01:20:46.036: INFO: Container test-container-subpath-inlinevolume-v5md ready: false, restart count 0 Nov 26 01:20:46.036: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:23 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.036: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:20:46.036: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:20:46.036: INFO: l7-default-backend-8549d69d99-x8spc started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 01:20:46.036: INFO: konnectivity-agent-4brl9 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:20:46.036: INFO: netserver-0 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: true, restart count 4 Nov 26 01:20:46.036: INFO: httpd started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container httpd ready: true, restart count 8 Nov 26 01:20:46.036: INFO: netserver-0 started at 2022-11-26 01:06:00 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: false, restart count 6 Nov 26 01:20:46.036: INFO: coredns-6d97d5ddb-ghpwb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container coredns ready: false, restart count 8 Nov 26 01:20:46.036: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:48 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:46.036: INFO: Container busybox ready: false, restart count 6 Nov 26 01:20:46.036: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:20:46.036: INFO: Container driver-registrar ready: false, restart count 7 Nov 26 01:20:46.036: INFO: Container mock ready: false, restart count 7 Nov 26 01:20:46.036: INFO: ss-0 started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: false, restart count 11 Nov 26 01:20:46.036: INFO: lb-sourcerange-n4k92 started at 2022-11-26 01:00:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container netexec ready: false, restart count 7 Nov 26 01:20:46.036: INFO: execpod-dropdkfjx started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.036: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-5md2t started at 2022-11-26 01:03:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.036: INFO: 
kube-dns-autoscaler-5f6455f985-2brqn started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container autoscaler ready: false, restart count 8 Nov 26 01:20:46.036: INFO: execpod-acceptfj5ts started at 2022-11-26 00:59:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 01:20:46.036: INFO: netserver-0 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container webserver ready: true, restart count 2 Nov 26 01:20:46.036: INFO: kube-proxy-bootstrap-e2e-minion-group-0hjv started at 2022-11-26 00:56:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.036: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:20:46.847: INFO: Latency metrics for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:46.847: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:46.889: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 13036 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9512":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2301":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-8735":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-9114":"bootstrap-e2e-minion-group-2982","csi-hostpath-volumemode-9250":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-8838":"csi-mock-csi-mock-volumes-8838","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:13:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:45 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},},Config:nil,},} Nov 26 01:20:46.889: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:46.932: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:20:47.046: INFO: pod-subpath-test-inlinevolume-wppj started at 2022-11-26 00:59:05 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:47.046: INFO: Init container init-volume-inlinevolume-wppj ready: true, restart count 0 Nov 26 01:20:47.046: INFO: Container test-container-subpath-inlinevolume-wppj ready: true, restart count 9 Nov 26 01:20:47.046: INFO: Container test-container-volume-inlinevolume-wppj ready: false, restart count 6 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: 
false, restart count 3 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: false, restart count 3 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: false, restart count 3 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: false, restart count 3 Nov 26 01:20:47.046: INFO: Container hostpath ready: false, restart count 3 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: false, restart count 3 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: false, restart count 3 Nov 26 01:20:47.046: INFO: external-local-nodeport-hpnxr started at 2022-11-26 01:00:15 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container netexec ready: true, restart count 5 Nov 26 01:20:47.046: INFO: hostpath-3-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container hostpath-3-client ready: true, restart count 3 Nov 26 01:20:47.046: INFO: netserver-1 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 0 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:47.046: INFO: back-off-cap started at 2022-11-26 01:08:51 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container back-off-cap ready: false, restart count 7 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:07 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:47.046: INFO: kube-proxy-bootstrap-e2e-minion-group-2982 started at 2022-11-26 00:56:38 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 01:20:47.046: INFO: pod-configmaps-0039d476-e3ec-4d1f-95a0-589475853cfc started at 2022-11-26 01:02:20 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-262gq started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-xrccm started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: 
INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:20:47.046: INFO: pod-bed0f594-e6f2-4d1d-b243-e6b3a7adfbf2 started at 2022-11-26 01:03:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:47.046: INFO: var-expansion-8d1d368e-67cd-4a67-b256-8d870f10a0e2 started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container dapi-container ready: false, restart count 0 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-fm6cq started at 2022-11-26 01:03:21 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:20:47.046: INFO: metadata-proxy-v0.1-2rxjj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:47.046: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:47.046: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:47.046: INFO: external-local-update-rfn9p started at 2022-11-26 01:03:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container netexec ready: true, restart count 1 Nov 26 01:20:47.046: INFO: netserver-1 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:47.046: INFO: test-hostpath-type-9bw9n started at 2022-11-26 01:16:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:20:47.046: INFO: pod-subpath-test-preprovisionedpv-mkpm started at 2022-11-26 01:02:54 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:47.046: INFO: Init container init-volume-preprovisionedpv-mkpm ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container test-container-subpath-preprovisionedpv-mkpm ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container test-container-volume-preprovisionedpv-mkpm ready: true, restart count 6 Nov 26 01:20:47.046: INFO: hostpath-1-client started at 2022-11-26 01:03:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container hostpath-1-client ready: true, restart count 2 Nov 26 01:20:47.046: INFO: test-hostpath-type-lgxhw started at 2022-11-26 01:16:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:21 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:47.046: INFO: pod-5be3eec2-e823-4f42-901c-fd502ef8f0d6 started at 2022-11-26 00:59:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 
container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:47.046: INFO: hostpath-2-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container hostpath-2-client ready: true, restart count 2 Nov 26 01:20:47.046: INFO: netserver-1 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 6 Nov 26 01:20:47.046: INFO: hostpath-0-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container hostpath-0-client ready: true, restart count 4 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-n9wzs started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:20:47.046: INFO: pod-subpath-test-inlinevolume-7tmj started at 2022-11-26 01:03:45 +0000 UTC (1+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Init container init-volume-inlinevolume-7tmj ready: true, restart count 0 Nov 26 01:20:47.046: INFO: Container test-container-subpath-inlinevolume-7tmj ready: false, restart count 0 Nov 26 01:20:47.046: INFO: lb-internal-8mn52 started at 2022-11-26 01:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container netexec ready: false, restart count 6 Nov 26 01:20:47.046: INFO: csi-mockplugin-0 started at 2022-11-26 01:13:39 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:47.046: INFO: Container busybox ready: true, restart count 4 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 01:20:47.046: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:20:47.046: INFO: Container mock ready: true, restart count 5 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-x689s started at 2022-11-26 01:13:50 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:20:47.046: INFO: test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 0 Nov 26 01:20:47.046: INFO: test-container-pod started at 2022-11-26 01:16:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 1 Nov 26 01:20:47.046: INFO: external-provisioner-pm8mw started at 2022-11-26 01:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container nfs-provisioner ready: true, restart count 3 Nov 26 01:20:47.046: INFO: konnectivity-agent-kbwq2 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:20:47.046: INFO: netserver-1 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses 
recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:47.046: INFO: pod-subpath-test-preprovisionedpv-xdzr started at 2022-11-26 01:02:38 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:47.046: INFO: Init container init-volume-preprovisionedpv-xdzr ready: true, restart count 0 Nov 26 01:20:47.046: INFO: Container test-container-subpath-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container test-container-volume-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:30 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container hostpath ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-p2ns7 started at 2022-11-26 00:59:16 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:20:47.046: INFO: csi-mockplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:20:47.046: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 01:20:47.046: INFO: Container mock ready: true, restart count 3 Nov 26 01:20:47.046: INFO: ilb-host-exec started at 2022-11-26 01:12:53 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:20:47.046: INFO: host-test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: false, restart count 2 Nov 26 01:20:47.046: INFO: metrics-server-v0.5.2-867b8754b9-w4frb started at 2022-11-26 00:57:14 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:47.046: INFO: Container metrics-server ready: false, restart count 7 Nov 26 01:20:47.046: INFO: Container metrics-server-nanny ready: false, restart count 8 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-kxg4f started at 2022-11-26 01:00:17 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-hqtxc started at 2022-11-26 01:02:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:47.046: INFO: pod-4db8d57c-3453-4b56-99f5-8158379eb684 started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:47.046: INFO: test-hostpath-type-jf9w7 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:20:47.046: INFO: ss-1 started at 2022-11-26 01:02:07 +0000 UTC (0+1 container statuses 
recorded) Nov 26 01:20:47.046: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:47.046: INFO: hostexec-bootstrap-e2e-minion-group-2982-xmc6r started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:20:47.046: INFO: pod-a9bf9170-0527-4b88-ab1c-09ab6058409d started at 2022-11-26 01:03:43 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:47.046: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:47.046: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:06:57 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:47.046: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:20:47.046: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:20:49.479: INFO: Latency metrics for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:49.479: INFO: Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.521: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 13086 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6742":"bootstrap-e2e-minion-group-krkd","csi-hostpath-provisioning-1838":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-4622":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 01:15:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 01:16:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:20:49.522: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.568: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.631: INFO: kube-proxy-bootstrap-e2e-minion-group-krkd started at 2022-11-26 00:56:37 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:20:49.631: INFO: netserver-2 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container webserver ready: true, restart count 3 Nov 26 01:20:49.631: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:49.631: INFO: Container busybox ready: false, restart count 6 Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:20:49.631: INFO: Container driver-registrar ready: true, restart count 8 Nov 26 01:20:49.631: INFO: Container mock ready: true, restart count 8 Nov 26 01:20:49.631: INFO: coredns-6d97d5ddb-bw2sm started at 2022-11-26 00:57:04 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container coredns ready: false, restart count 8 Nov 26 01:20:49.631: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:51 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:49.631: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:14:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-attacher ready: false, restart count 3 Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: false, restart 
count 3 Nov 26 01:20:49.631: INFO: Container csi-resizer ready: false, restart count 3 Nov 26 01:20:49.631: INFO: Container csi-snapshotter ready: false, restart count 3 Nov 26 01:20:49.631: INFO: Container hostpath ready: false, restart count 3 Nov 26 01:20:49.631: INFO: Container liveness-probe ready: false, restart count 3 Nov 26 01:20:49.631: INFO: Container node-driver-registrar ready: false, restart count 3 Nov 26 01:20:49.631: INFO: netserver-2 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container webserver ready: false, restart count 7 Nov 26 01:20:49.631: INFO: konnectivity-agent-qtkxb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container konnectivity-agent ready: true, restart count 7 Nov 26 01:20:49.631: INFO: ss-2 started at 2022-11-26 01:03:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container webserver ready: false, restart count 7 Nov 26 01:20:49.631: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:20:49.631: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 01:20:49.631: INFO: Container mock ready: false, restart count 5 Nov 26 01:20:49.631: INFO: pvc-volume-tester-5lrn7 started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container volume-tester ready: false, restart count 0 Nov 26 01:20:49.631: INFO: metadata-proxy-v0.1-qzrwj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:49.631: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:49.631: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:49.631: INFO: netserver-2 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container webserver ready: true, restart count 7 Nov 26 01:20:49.631: INFO: netserver-2 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container webserver ready: true, restart count 3 Nov 26 01:20:49.631: INFO: hostexec-bootstrap-e2e-minion-group-krkd-2wbgn started at 2022-11-26 01:01:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 01:20:49.631: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container hostpath ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 01:20:49.631: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 01:20:49.631: INFO: hostexec-bootstrap-e2e-minion-group-krkd-4bh2r started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:20:49.631: INFO: pod-subpath-test-preprovisionedpv-snr7 started at 2022-11-26 00:59:30 +0000 UTC (1+2 container statuses recorded) Nov 26 
01:20:49.631: INFO: Init container init-volume-preprovisionedpv-snr7 ready: true, restart count 6 Nov 26 01:20:49.631: INFO: Container test-container-subpath-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:20:49.631: INFO: Container test-container-volume-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:20:49.631: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container mock ready: true, restart count 5 Nov 26 01:20:49.631: INFO: pod-back-off-image started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.631: INFO: Container back-off ready: false, restart count 8 Nov 26 01:20:49.631: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.631: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:49.631: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:50.558: INFO: Latency metrics for node bootstrap-e2e-minion-group-krkd [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-9815" for this suite. 11/26/22 01:20:50.558
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc001682b60}, 0xc00067c500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000c7b230, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc004ff9de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc001682b60?, 0xc004ff9e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc001682b60}, 0x3, 0x3, 0xc00067c500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 There were additional failures detected after the initial failure: [FAILED] Nov 26 01:04:04.988: Get "https://34.168.44.214/apis/apps/v1/namespaces/statefulset-2917/statefulsets": dial tcp 34.168.44.214:443: connect: connection refused In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76 ---------- [FAILED] Nov 26 01:04:05.068: failed to list events in namespace "statefulset-2917": Get "https://34.168.44.214/api/v1/namespaces/statefulset-2917/events": dial tcp 34.168.44.214:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 01:04:05.108: Couldn't delete ns: "statefulset-2917": Delete "https://34.168.44.214/api/v1/namespaces/statefulset-2917": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/statefulset-2917", Err:(*net.OpError)(0xc00500aa00)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
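Note on the failure summary above: the error surfaces from statefulset.GetPodList, which WaitForRunning invokes from inside wait.PollImmediate, and the list call is wrapped in ExpectNoError, so the first connection-refused response from the API server fails the spec outright instead of being retried. Below is a minimal sketch of that polling shape, assuming client-go; the helper name waitForRunningPods and the intervals are illustrative, not the framework's actual code.

    // Sketch only: a polling wait of the shape used by WaitForRunning.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForRunningPods is a hypothetical stand-in for the framework helper:
    // poll the pod list for a label selector until `want` pods are present.
    func waitForRunningPods(c kubernetes.Interface, ns, selector string, want int) error {
        return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
            pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                // A list error (here: connection refused) is not retried; in this
                // sketch it aborts the poll, and in the framework the same call is
                // wrapped in ExpectNoError, which fails the spec on the spot.
                return false, fmt.Errorf("listing pods in %s: %w", ns, err)
            }
            return len(pods.Items) >= want, nil
        })
    }

In this sketch the error is returned to wait.PollImmediate, which also stops the poll at once; the framework instead routes it through ExpectNoError and Fail, as the stack trace in the log below shows.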
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:00:01.458 Nov 26 01:00:01.458: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 01:00:01.459 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:00:01.635 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:00:01.726 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-2917 11/26/22 01:00:01.832 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/26/22 01:00:01.907 STEP: Creating stateful set ss in namespace statefulset-2917 11/26/22 01:00:01.955 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2917 11/26/22 01:00:02.009 Nov 26 01:00:02.063: INFO: Found 0 stateful pods, waiting for 1 Nov 26 01:00:12.126: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 11/26/22 01:00:12.126 Nov 26 01:00:12.184: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 01:00:13.122: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 01:00:13.122: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 01:00:13.122: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 01:00:13.170: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 26 01:00:23.257: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 26 01:00:23.257: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 01:00:23.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998805s Nov 26 01:00:24.550: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.935562184s Nov 26 01:00:25.672: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.876648046s Nov 26 01:00:26.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.755728169s Nov 26 01:00:27.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.689599318s Nov 26 01:00:52.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.634339264s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2917 11/26/22 01:00:53.032 Nov 26 01:00:53.075: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 01:00:53.449: INFO: rc: 1 Nov 26 01:00:53.449: INFO: Waiting 10s 
to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 01:01:03.450: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 01:01:03.573: INFO: rc: 1 Nov 26 01:01:03.573: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1 Nov 26 01:01:13.573: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 01:01:13.688: INFO: rc: 1 Nov 26 01:01:13.688: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1 Nov 26 01:01:23.688: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 01:01:23.816: INFO: rc: 1 Nov 26 01:01:23.816: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1 E1126 01:01:26.419567 10176 retrywatcher.go:130] "Watch failed" err="Get \"https://34.168.44.214/api/v1/namespaces/statefulset-2917/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=3083&watch=true\": dial tcp 34.168.44.214:443: connect: connection refused" E1126 01:01:27.420351 10176 retrywatcher.go:130] "Watch failed" err="Get \"https://34.168.44.214/api/v1/namespaces/statefulset-2917/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=3083&watch=true\": dial tcp 34.168.44.214:443: connect: connection refused" Nov 26 01:01:33.816: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=statefulset-2917 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 01:01:34.731: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 26 01:01:34.731: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 26 01:01:34.731: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 26 01:01:34.868: INFO: Found 1 stateful pods, waiting for 3 Nov 26 01:01:44.911: INFO: Found 1 stateful pods, waiting for 3 Nov 26 01:01:54.911: INFO: Found 1 stateful pods, waiting for 3 Nov 26 01:02:04.910: INFO: Found 1 stateful pods, waiting for 3 Nov 26 01:02:14.942: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:02:24.920: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:02:34.977: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:02:44.918: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:02:55.119: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:03:04.923: INFO: Found 2 stateful pods, waiting for 3 Nov 26 01:03:14.992: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 01:03:24.940: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 01:03:34.935: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 01:03:44.940: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 01:03:54.937: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 01:04:04.909: INFO: Unexpected error: <*url.Error | 0xc00202e240>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/statefulset-2917/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc00500a0f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0007c24b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0044f6000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:04:04.909: FAIL: Get "https://34.168.44.214/api/v1/namespaces/statefulset-2917/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc001682b60}, 0xc00067c500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000c7b230, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc004ff9de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc001682b60?, 0xc004ff9e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc001682b60}, 0x3, 0x3, 0xc00067c500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 E1126 01:04:04.909904 10176 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc001682b60}, 0xc00067c500)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000c7b230, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc004ff9de0?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc001682b60?, 0xc004ff9e20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc001682b60}, 0x3, 0x3, 0xc00067c500)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()\n\ttest/e2e/apps/statefulset.go:643 +0x6d0", CustomMessage:""}} (Your Test Panicked test/e2e/framework/statefulset/rest.go:69 When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure ) goroutine 1285 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000208770}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000208770?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000208770}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00369a240, 0xb6}, {0xc0007a1540?, 0x75b521a?, 0xc0007a1560?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000df8f20, 0xa1}, {0xc0007a15d8?, 0xc000df8f20?, 0xc0007a1600?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc00202e240}, {0x0?, 0xc004e2a020?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc001682b60}, 0xc00067c500) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000c7b230, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc004ff9de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc001682b60?, 0xc004ff9e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc001682b60}, 0x3, 0x3, 0xc00067c500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc004ef0900, 0xc004eee600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 26 01:04:04.949: INFO: Deleting all statefulset in ns statefulset-2917 Nov 26 01:04:04.988: INFO: Unexpected error: <*url.Error | 0xc00202e930>: { Op: "Get", URL: "https://34.168.44.214/apis/apps/v1/namespaces/statefulset-2917/statefulsets", Err: <*net.OpError | 0xc00500a410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0007c27e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0044f6480>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:04:04.988: FAIL: Get "https://34.168.44.214/apis/apps/v1/namespaces/statefulset-2917/statefulsets": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc001682b60}, {0xc00202b9c0, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 26 01:04:04.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:04:05.028 STEP: Collecting events from namespace "statefulset-2917". 
11/26/22 01:04:05.028 Nov 26 01:04:05.068: INFO: Unexpected error: failed to list events in namespace "statefulset-2917": <*url.Error | 0xc0007c2840>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/statefulset-2917/events", Err: <*net.OpError | 0xc004ce25a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00500c540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004d0a3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:04:05.068: FAIL: failed to list events in namespace "statefulset-2917": Get "https://34.168.44.214/api/v1/namespaces/statefulset-2917/events": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0007a05c0, {0xc00202b9c0, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001682b60}, {0xc00202b9c0, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0007a0650?, {0xc00202b9c0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000a2e1e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0013e8430?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013e8430?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-2917" for this suite. 11/26/22 01:04:05.068 Nov 26 01:04:05.108: FAIL: Couldn't delete ns: "statefulset-2917": Delete "https://34.168.44.214/api/v1/namespaces/statefulset-2917": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/statefulset-2917", Err:(*net.OpError)(0xc00500aa00)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a2e1e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0013e8380?, 0xc0000c9fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013e8380?, 0x0?}, {0xae73300?, 0x5?, 0xc0047341c8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
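The stack trace above bottoms out in the framework's polling helper: WaitForRunning drives wait.PollImmediate, and every iteration lists the StatefulSet's pods via GetPodList; as soon as the API server refuses the connection, ExpectNoError turns that error into a ginkgo Fail, which is why the trace shows a panic inside the poll loop. The Go snippet below is a minimal sketch of that pattern, not the framework's own code: it assumes a plain client-go clientset, and the helper name waitForStatefulSetPods, the label selector argument, and the intervals are illustrative only.

// Illustrative sketch of the poll-for-running-pods pattern visible in the trace
// (WaitForRunning -> PollImmediate -> GetPodList). Names and intervals are assumed.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForStatefulSetPods polls until `want` pods matching `selector` are Running.
func waitForStatefulSetPods(c kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// An unreachable API server surfaces here, e.g.
			// "dial tcp 34.168.44.214:443: connect: connection refused".
			// Returning the error aborts the poll immediately.
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		return running >= want, nil
	})
}

In the real framework the List error is not returned to the caller at all: ExpectNoError converts it into a test failure on the spot, so the connection-refused error also breaks the subsequent DeferCleanup steps (event dump and namespace deletion), as shown above.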
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008323c0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
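This failure happens before the test body ever runs: the framework's BeforeEach creates the namespace and then waits for the "default" ServiceAccount to appear in it, and the log below shows that wait expiring with "timed out waiting for the condition" (the same dump also shows kube-controller-manager not ready with several restarts, which would keep the service-account controller from populating new namespaces). The sketch below shows the general shape of such a wait using client-go; it is an assumption-laden illustration, and the helper name, poll interval, and timeout are not the framework's actual implementation.

// Illustrative sketch of waiting for the "default" ServiceAccount in a new
// namespace. Helper name, interval, and timeout are assumed for illustration.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDefaultServiceAccount(c kubernetes.Interface, ns string, timeout time.Duration) error {
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		switch {
		case err == nil:
			return true, nil // controller has populated the namespace
		case apierrors.IsNotFound(err):
			return false, nil // not created yet; keep polling until timeout
		default:
			return false, err // transport or server error: stop immediately
		}
	})
	if err != nil {
		return fmt.Errorf("wait for service account %q in namespace %q: %w", "default", ns, err)
	}
	return nil
}

If the ServiceAccount never appears, the wrapped error reads exactly like the one in the log below, and the spec is failed in BeforeEach before "It" runs.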
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:19:26.138 Nov 26 01:19:26.138: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/26/22 01:19:26.139 Nov 26 01:21:26.183: INFO: Unexpected error: <*fmt.wrapError | 0xc003b8a0e0>: { msg: "wait for service account \"default\" in namespace \"svcaccounts-2301\": timed out waiting for the condition", err: <*errors.errorString | 0xc000115d70>{ s: "timed out waiting for the condition", }, } Nov 26 01:21:26.184: FAIL: wait for service account "default" in namespace "svcaccounts-2301": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008323c0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 26 01:21:26.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:21:26.226 STEP: Collecting events from namespace "svcaccounts-2301". 11/26/22 01:21:26.226 STEP: Found 0 events. 11/26/22 01:21:26.267 Nov 26 01:21:26.307: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:21:26.307: INFO: Nov 26 01:21:26.349: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:21:26.390: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 12717 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:17:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:21:26.390: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:21:26.434: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:21:26.483: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container kube-apiserver ready: true, restart count 3 Nov 26 01:21:26.483: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 26 01:21:26.483: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 00:56:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container kube-addon-manager ready: true, restart count 2 Nov 26 01:21:26.483: INFO: 
l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 00:56:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container l7-lb-controller ready: false, restart count 7 Nov 26 01:21:26.483: INFO: metadata-proxy-v0.1-8h6mf started at 2022-11-26 00:56:42 +0000 UTC (0+2 container statuses recorded) Nov 26 01:21:26.483: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:21:26.483: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:21:26.483: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container kube-scheduler ready: true, restart count 4 Nov 26 01:21:26.483: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container etcd-container ready: true, restart count 5 Nov 26 01:21:26.483: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container etcd-container ready: true, restart count 3 Nov 26 01:21:26.483: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.483: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 01:21:26.648: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 01:21:26.648: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:21:26.689: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 13226 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3787":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-multivolume-8152":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-provisioning-5652":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 
is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:21:19 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:21:19 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:21:19 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:21:19 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:21:26.689: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:21:26.733: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:21:26.791: INFO: volume-snapshot-controller-0 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container volume-snapshot-controller ready: false, restart count 7 Nov 26 01:21:26.791: INFO: pod-subpath-test-dynamicpv-2vf4 started at 2022-11-26 01:00:19 +0000 UTC (1+2 container statuses recorded) Nov 26 01:21:26.791: INFO: Init container init-volume-dynamicpv-2vf4 ready: true, restart count 1 Nov 26 01:21:26.791: INFO: Container test-container-subpath-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 
01:21:26.791: INFO: Container test-container-volume-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:21:26.791: INFO: metadata-proxy-v0.1-8d7ds started at 2022-11-26 00:56:40 +0000 UTC (0+2 container statuses recorded) Nov 26 01:21:26.791: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:21:26.791: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:21:26.791: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-kpcm8 started at 2022-11-26 00:59:55 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 01:21:26.791: INFO: netserver-0 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: false, restart count 6 Nov 26 01:21:26.791: INFO: pod-configmaps-cc7f33ac-2f26-44c6-ad1b-c8b91ecdfde7 started at 2022-11-26 01:02:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:21:26.791: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-ct8rx started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:21:26.791: INFO: netserver-0 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: false, restart count 3 Nov 26 01:21:26.791: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:15:34 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:26.791: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:21:26.791: INFO: pod-subpath-test-inlinevolume-v5md started at 2022-11-26 01:00:23 +0000 UTC (1+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Init container init-volume-inlinevolume-v5md ready: true, restart count 0 Nov 26 01:21:26.791: INFO: Container test-container-subpath-inlinevolume-v5md ready: false, restart count 0 Nov 26 01:21:26.791: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:23 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:26.791: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:21:26.791: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:21:26.791: INFO: l7-default-backend-8549d69d99-x8spc started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 01:21:26.791: INFO: konnectivity-agent-4brl9 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses 
recorded) Nov 26 01:21:26.791: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:21:26.791: INFO: netserver-0 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: true, restart count 4 Nov 26 01:21:26.791: INFO: httpd started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container httpd ready: true, restart count 8 Nov 26 01:21:26.791: INFO: netserver-0 started at 2022-11-26 01:06:00 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: false, restart count 7 Nov 26 01:21:26.791: INFO: coredns-6d97d5ddb-ghpwb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container coredns ready: false, restart count 8 Nov 26 01:21:26.791: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:48 +0000 UTC (0+4 container statuses recorded) Nov 26 01:21:26.791: INFO: Container busybox ready: false, restart count 6 Nov 26 01:21:26.791: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:21:26.791: INFO: Container driver-registrar ready: false, restart count 7 Nov 26 01:21:26.791: INFO: Container mock ready: false, restart count 7 Nov 26 01:21:26.791: INFO: ss-0 started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: false, restart count 11 Nov 26 01:21:26.791: INFO: lb-sourcerange-n4k92 started at 2022-11-26 01:00:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container netexec ready: false, restart count 7 Nov 26 01:21:26.791: INFO: execpod-dropdkfjx started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:21:26.791: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-5md2t started at 2022-11-26 01:03:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:21:26.791: INFO: kube-dns-autoscaler-5f6455f985-2brqn started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container autoscaler ready: false, restart count 8 Nov 26 01:21:26.791: INFO: execpod-acceptfj5ts started at 2022-11-26 00:59:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:21:26.791: INFO: netserver-0 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container webserver ready: true, restart count 2 Nov 26 01:21:26.791: INFO: kube-proxy-bootstrap-e2e-minion-group-0hjv started at 2022-11-26 00:56:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:21:26.791: INFO: pod-d647abcb-295b-4ba3-bb3b-72f4c6f3de02 started at 2022-11-26 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:21:26.791: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-bkkbv started at 2022-11-26 01:03:25 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: false, restart count 6 Nov 26 01:21:26.791: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:52 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:26.791: INFO: 
Container csi-attacher ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:21:26.791: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 01:21:26.791: INFO: pod-configmaps-a8d056c0-ff53-45cb-8c13-ec73b1032b04 started at 2022-11-26 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:26.791: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:21:27.038: INFO: Latency metrics for node bootstrap-e2e-minion-group-0hjv Nov 26 01:21:27.038: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:21:27.080: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 13175 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3663":"bootstrap-e2e-minion-group-2982","csi-hostpath-multivolume-9512":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2301":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-8735":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-9114":"bootstrap-e2e-minion-group-2982","csi-hostpath-volumemode-9250":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-8838":"csi-mock-csi-mock-volumes-8838","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:13:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector 
Update v1 2022-11-26 01:16:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},},Config:nil,},} Nov 26 01:21:27.080: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:21:27.124: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:21:27.198: INFO: ss-1 started at 2022-11-26 01:02:07 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: true, restart count 5 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-xmc6r started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:21:27.198: INFO: pod-a9bf9170-0527-4b88-ab1c-09ab6058409d started at 2022-11-26 01:03:43 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container write-pod ready: false, restart 
count 0 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:06:57 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:21:27.198: INFO: pod-subpath-test-inlinevolume-wppj started at 2022-11-26 00:59:05 +0000 UTC (1+2 container statuses recorded) Nov 26 01:21:27.198: INFO: Init container init-volume-inlinevolume-wppj ready: true, restart count 0 Nov 26 01:21:27.198: INFO: Container test-container-subpath-inlinevolume-wppj ready: true, restart count 9 Nov 26 01:21:27.198: INFO: Container test-container-volume-inlinevolume-wppj ready: false, restart count 6 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:21:27.198: INFO: external-local-nodeport-hpnxr started at 2022-11-26 01:00:15 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container netexec ready: true, restart count 5 Nov 26 01:21:27.198: INFO: hostpath-3-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container hostpath-3-client ready: true, restart count 3 Nov 26 01:21:27.198: INFO: netserver-1 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: true, restart count 0 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:21:27.198: INFO: back-off-cap started at 2022-11-26 01:08:51 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container back-off-cap ready: false, restart count 7 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:07 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:21:27.198: INFO: 
Container csi-provisioner ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:21:27.198: INFO: kube-proxy-bootstrap-e2e-minion-group-2982 started at 2022-11-26 00:56:38 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container kube-proxy ready: true, restart count 8 Nov 26 01:21:27.198: INFO: pod-configmaps-0039d476-e3ec-4d1f-95a0-589475853cfc started at 2022-11-26 01:02:20 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-262gq started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-xrccm started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:21:27.198: INFO: pod-bed0f594-e6f2-4d1d-b243-e6b3a7adfbf2 started at 2022-11-26 01:03:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:21:27.198: INFO: var-expansion-8d1d368e-67cd-4a67-b256-8d870f10a0e2 started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container dapi-container ready: false, restart count 0 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-fm6cq started at 2022-11-26 01:03:21 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:21:27.198: INFO: metadata-proxy-v0.1-2rxjj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:21:27.198: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:21:27.198: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:21:27.198: INFO: external-local-update-rfn9p started at 2022-11-26 01:03:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container netexec ready: true, restart count 1 Nov 26 01:21:27.198: INFO: netserver-1 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: true, restart count 5 Nov 26 01:21:27.198: INFO: test-hostpath-type-9bw9n started at 2022-11-26 01:16:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:21:27.198: INFO: pod-subpath-test-preprovisionedpv-mkpm started at 2022-11-26 01:02:54 +0000 UTC (1+2 container statuses recorded) Nov 26 01:21:27.198: INFO: Init container init-volume-preprovisionedpv-mkpm ready: true, restart count 0 Nov 26 01:21:27.198: INFO: Container test-container-subpath-preprovisionedpv-mkpm ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container test-container-volume-preprovisionedpv-mkpm ready: false, restart count 6 Nov 26 01:21:27.198: INFO: hostpath-1-client started at 2022-11-26 01:03:13 +0000 UTC (0+1 container 
statuses recorded) Nov 26 01:21:27.198: INFO: Container hostpath-1-client ready: true, restart count 2 Nov 26 01:21:27.198: INFO: test-hostpath-type-lgxhw started at 2022-11-26 01:16:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:21 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:21:27.198: INFO: pod-5be3eec2-e823-4f42-901c-fd502ef8f0d6 started at 2022-11-26 00:59:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:21:27.198: INFO: hostpath-2-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container hostpath-2-client ready: true, restart count 2 Nov 26 01:21:27.198: INFO: netserver-1 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: false, restart count 6 Nov 26 01:21:27.198: INFO: hostpath-0-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container hostpath-0-client ready: true, restart count 4 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-n9wzs started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:21:27.198: INFO: pod-subpath-test-inlinevolume-7tmj started at 2022-11-26 01:03:45 +0000 UTC (1+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Init container init-volume-inlinevolume-7tmj ready: true, restart count 0 Nov 26 01:21:27.198: INFO: Container test-container-subpath-inlinevolume-7tmj ready: false, restart count 0 Nov 26 01:21:27.198: INFO: lb-internal-8mn52 started at 2022-11-26 01:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container netexec ready: false, restart count 6 Nov 26 01:21:27.198: INFO: csi-mockplugin-0 started at 2022-11-26 01:13:39 +0000 UTC (0+4 container statuses recorded) Nov 26 01:21:27.198: INFO: Container busybox ready: true, restart count 4 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart 
count 5 Nov 26 01:21:27.198: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:21:27.198: INFO: Container mock ready: true, restart count 5 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-x689s started at 2022-11-26 01:13:50 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:21:27.198: INFO: test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: true, restart count 0 Nov 26 01:21:27.198: INFO: test-container-pod started at 2022-11-26 01:16:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: true, restart count 1 Nov 26 01:21:27.198: INFO: external-provisioner-pm8mw started at 2022-11-26 01:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container nfs-provisioner ready: true, restart count 3 Nov 26 01:21:27.198: INFO: konnectivity-agent-kbwq2 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:21:27.198: INFO: netserver-1 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container webserver ready: false, restart count 6 Nov 26 01:21:27.198: INFO: pod-subpath-test-preprovisionedpv-xdzr started at 2022-11-26 01:02:38 +0000 UTC (1+2 container statuses recorded) Nov 26 01:21:27.198: INFO: Init container init-volume-preprovisionedpv-xdzr ready: true, restart count 0 Nov 26 01:21:27.198: INFO: Container test-container-subpath-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:21:27.198: INFO: Container test-container-volume-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:21:27.198: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:30 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container hostpath ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 01:21:27.198: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-p2ns7 started at 2022-11-26 00:59:16 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:21:27.198: INFO: csi-mockplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+3 container statuses recorded) Nov 26 01:21:27.198: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:21:27.198: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 01:21:27.198: INFO: Container mock ready: true, restart count 3 Nov 26 01:21:27.198: INFO: ilb-host-exec started at 2022-11-26 01:12:53 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: false, restart count 1 Nov 26 01:21:27.198: INFO: host-test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart 
count 3 Nov 26 01:21:27.198: INFO: metrics-server-v0.5.2-867b8754b9-w4frb started at 2022-11-26 00:57:14 +0000 UTC (0+2 container statuses recorded) Nov 26 01:21:27.198: INFO: Container metrics-server ready: false, restart count 7 Nov 26 01:21:27.198: INFO: Container metrics-server-nanny ready: false, restart count 9 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-kxg4f started at 2022-11-26 01:00:17 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:21:27.198: INFO: hostexec-bootstrap-e2e-minion-group-2982-hqtxc started at 2022-11-26 01:02:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:21:27.198: INFO: pod-4db8d57c-3453-4b56-99f5-8158379eb684 started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:21:27.198: INFO: test-hostpath-type-jf9w7 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.198: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:21:27.695: INFO: Latency metrics for node bootstrap-e2e-minion-group-2982 Nov 26 01:21:27.695: INFO: Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:21:27.738: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 13237 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6742":"bootstrap-e2e-minion-group-krkd","csi-hostpath-provisioning-1838":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-4622":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 01:15:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 01:16:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:21:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:20:56 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:20:56 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:20:56 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:20:56 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:21:27.738: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:21:27.784: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:21:27.853: INFO: hostexec-bootstrap-e2e-minion-group-krkd-2wbgn started at 2022-11-26 01:01:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 01:21:27.853: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container hostpath ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 01:21:27.853: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 01:21:27.853: INFO: hostexec-bootstrap-e2e-minion-group-krkd-4bh2r started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:21:27.853: INFO: pod-subpath-test-preprovisionedpv-snr7 started at 2022-11-26 00:59:30 +0000 UTC (1+2 container statuses recorded) Nov 26 01:21:27.853: INFO: Init container init-volume-preprovisionedpv-snr7 ready: true, restart count 6 Nov 26 01:21:27.853: INFO: Container test-container-subpath-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:21:27.853: INFO: Container test-container-volume-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:21:27.853: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container mock ready: true, restart count 5 Nov 26 01:21:27.853: INFO: 
pod-back-off-image started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container back-off ready: false, restart count 8 Nov 26 01:21:27.853: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:21:27.853: INFO: kube-proxy-bootstrap-e2e-minion-group-krkd started at 2022-11-26 00:56:37 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:21:27.853: INFO: netserver-2 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container webserver ready: true, restart count 3 Nov 26 01:21:27.853: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+4 container statuses recorded) Nov 26 01:21:27.853: INFO: Container busybox ready: false, restart count 6 Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:21:27.853: INFO: Container driver-registrar ready: false, restart count 8 Nov 26 01:21:27.853: INFO: Container mock ready: false, restart count 8 Nov 26 01:21:27.853: INFO: coredns-6d97d5ddb-bw2sm started at 2022-11-26 00:57:04 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container coredns ready: false, restart count 8 Nov 26 01:21:27.853: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:51 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:21:27.853: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:21:27.853: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:14:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-attacher ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container csi-resizer ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container hostpath ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container liveness-probe ready: false, restart count 4 Nov 26 01:21:27.853: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 26 01:21:27.853: INFO: netserver-2 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container webserver ready: false, restart count 7 Nov 26 01:21:27.853: INFO: konnectivity-agent-qtkxb started at 2022-11-26 
00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container konnectivity-agent ready: true, restart count 7 Nov 26 01:21:27.853: INFO: ss-2 started at 2022-11-26 01:03:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container webserver ready: false, restart count 7 Nov 26 01:21:27.853: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:21:27.853: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:21:27.853: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 01:21:27.853: INFO: Container mock ready: false, restart count 5 Nov 26 01:21:27.853: INFO: pvc-volume-tester-5lrn7 started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container volume-tester ready: false, restart count 0 Nov 26 01:21:27.853: INFO: metadata-proxy-v0.1-qzrwj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:21:27.853: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:21:27.853: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:21:27.853: INFO: netserver-2 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container webserver ready: true, restart count 7 Nov 26 01:21:27.853: INFO: netserver-2 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:21:27.853: INFO: Container webserver ready: true, restart count 3 Nov 26 01:21:28.132: INFO: Latency metrics for node bootstrap-e2e-minion-group-krkd [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-2301" for this suite. 11/26/22 01:21:28.132
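The per-node listings above ("Logging pods the kubelet thinks is on node ...", with a ready flag and restart count per container) come from the e2e framework's debug dump after the failure. For reference only, a roughly equivalent view can be pulled with client-go; this is a sketch, with the kubeconfig path and node name copied from the log above rather than anything the framework itself runs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the test log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every pod scheduled to the node from the dump and print the same
	// per-container ready/restart information the framework logs.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=bootstrap-e2e-minion-group-krkd",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%v restarts=%d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}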
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 26 01:03:59.025: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2875 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.168.44.214/api/v1/namespaces/kubectl-2875/pods/httpd": dial tcp 34.168.44.214:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 26 01:03:59.105: failed to list events in namespace "kubectl-2875": Get "https://34.168.44.214/api/v1/namespaces/kubectl-2875/events": dial tcp 34.168.44.214:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 01:03:59.145: Couldn't delete ns: "kubectl-2875": Delete "https://34.168.44.214/api/v1/namespaces/kubectl-2875": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/kubectl-2875", Err:(*net.OpError)(0xc002bb89b0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
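Every failure in this summary is the same symptom: dial tcp 34.168.44.214:443: connect: connection refused, i.e. the kube-apiserver was unreachable while the test and its cleanup ran. A minimal way to confirm that from the same kubeconfig (a sketch, not part of the test suite) is to hit the apiserver's /readyz endpoint directly:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run reports.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /readyz?verbose on the apiserver; a connection-refused error here
	// reproduces what every API call in the log ran into.
	body, err := client.Discovery().RESTClient().
		Get().AbsPath("/readyz").Param("verbose", "true").
		Do(context.TODO()).Raw()
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	fmt.Println(string(body))
}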
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:02:51.572 Nov 26 01:02:51.572: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 01:02:51.574 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:02:51.791 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:02:51.898 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 01:02:51.994 Nov 26 01:02:51.994: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2875 create -f -' Nov 26 01:02:52.754: INFO: stderr: "" Nov 26 01:02:52.754: INFO: stdout: "pod/httpd created\n" Nov 26 01:02:52.754: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 01:02:52.754: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2875" to be "running and ready" Nov 26 01:02:52.855: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 101.251715ms Nov 26 01:02:52.855: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' to be 'Running' but was 'Pending' Nov 26 01:02:54.921: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.16739945s Nov 26 01:02:54.921: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:02:56.927: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.173144327s Nov 26 01:02:56.927: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:02:58.935: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.181483506s Nov 26 01:02:58.936: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:00.906: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.151876576s Nov 26 01:03:00.906: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:02.932: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.177984537s Nov 26 01:03:02.932: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:04.922: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.167860935s Nov 26 01:03:04.922: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:06.913: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.159457422s Nov 26 01:03:06.914: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:08.921: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.167063498s Nov 26 01:03:08.921: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:10.919: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.164802161s Nov 26 01:03:10.919: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:13.015: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.26088525s Nov 26 01:03:13.015: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:14.957: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.202759626s Nov 26 01:03:14.957: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:16.920: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.165813691s Nov 26 01:03:16.920: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:18.914: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.160300574s Nov 26 01:03:18.914: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:20.914: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 28.159880933s Nov 26 01:03:20.914: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:22.916: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.162178672s Nov 26 01:03:22.916: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:24.916: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.162370785s Nov 26 01:03:24.916: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:26.912: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 34.157638943s Nov 26 01:03:26.912: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:28.975: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 36.220853033s Nov 26 01:03:28.975: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:30.920: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.16593964s Nov 26 01:03:30.920: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:32.910: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 40.155729543s Nov 26 01:03:32.910: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:34.934: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 42.180251499s Nov 26 01:03:34.934: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:36.914: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 44.160222392s Nov 26 01:03:36.914: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:38.916: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.162227793s Nov 26 01:03:38.916: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:40.920: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 48.165951958s Nov 26 01:03:40.920: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:42.947: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 50.192533244s Nov 26 01:03:42.947: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:44.913: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 52.158750775s Nov 26 01:03:44.913: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:46.911: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.157118891s Nov 26 01:03:46.911: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:48.917: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 56.163320986s Nov 26 01:03:48.917: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:50.934: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 58.179937718s Nov 26 01:03:50.934: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:52.916: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.162344562s Nov 26 01:03:52.916: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:54.936: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.182265794s Nov 26 01:03:54.936: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:56.929: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.174652973s Nov 26 01:03:56.929: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-0hjv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:02:52 +0000 UTC }] Nov 26 01:03:58.895: INFO: Encountered non-retryable error while getting pod kubectl-2875/httpd: Get "https://34.168.44.214/api/v1/namespaces/kubectl-2875/pods/httpd": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:03:58.896: INFO: Pod httpd failed to be running and ready. Nov 26 01:03:58.896: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Nov 26 01:03:58.896: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 01:03:58.896 Nov 26 01:03:58.896: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2875 delete --grace-period=0 --force -f -' Nov 26 01:03:59.025: INFO: rc: 1 Nov 26 01:03:59.025: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc002809380>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2875 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.168.44.214/api/v1/namespaces/kubectl-2875/pods/httpd\": dial tcp 34.168.44.214:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 01:03:59.025: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2875 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
error: error when deleting "STDIN": Delete "https://34.168.44.214/api/v1/namespaces/kubectl-2875/pods/httpd": dial tcp 34.168.44.214:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc002bb11e0?, 0x0?}, {0xc005099da0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc005099da0, 0xc}, {0xc001b809a0, 0x145}, {0xc000bcfec0?, 0x8?, 0x7fc96205c108?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc001b809a0, 0x145}, {0xc005099da0, 0xc}, {0xc002b82b50, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 01:03:59.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:03:59.065 STEP: Collecting events from namespace "kubectl-2875". 11/26/22 01:03:59.065 Nov 26 01:03:59.105: INFO: Unexpected error: failed to list events in namespace "kubectl-2875": <*url.Error | 0xc002243890>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/kubectl-2875/events", Err: <*net.OpError | 0xc00220ed20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002b578c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006e7500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:03:59.105: FAIL: failed to list events in namespace "kubectl-2875": Get "https://34.168.44.214/api/v1/namespaces/kubectl-2875/events": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0047725c0, {0xc005099da0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001a7b6c0}, {0xc005099da0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004772650?, {0xc005099da0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000fa42d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0050946b0?, 0xc003d01fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000dc3748?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0050946b0?, 0x29449fc?}, {0xae73300?, 0xc003d01f80?, 0xc003c22570?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-2875" for this suite. 
11/26/22 01:03:59.105 Nov 26 01:03:59.145: FAIL: Couldn't delete ns: "kubectl-2875": Delete "https://34.168.44.214/api/v1/namespaces/kubectl-2875": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/kubectl-2875", Err:(*net.OpError)(0xc002bb89b0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000fa42d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc005094630?, 0x6d756c6f56206565?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x2d656367203a6570?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc005094630?, 0x627573205d73776f?}, {0xae73300?, 0x6f6d6e7520646c75?, 0x7020666920746e75?}) /usr/local/go/src/reflect/value.go:368 +0xbc
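In the log above the httpd pod stays Running but never reaches Ready, and from 01:03:58 onward every API call (the pod get, the kubectl delete, event listing, namespace deletion) fails with connection refused against https://34.168.44.214:443, which points at the control-plane endpoint dropping out rather than the test body misbehaving. A minimal manual check along the same lines, assuming the same kubeconfig and cluster are still available (plain curl/kubectl usage shown for illustration, not part of the test framework):

    # Is the apiserver endpoint answering at all?
    curl -sk https://34.168.44.214/healthz
    # If it is, inspect the pod's Ready condition the way the poll loop above does:
    kubectl --kubeconfig=/workspace/.kube/config -n kubectl-2875 get pod httpd \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'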
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
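The focus expression above pins the run to this one spec. As an illustrative alternative, assuming a locally built e2e.test binary (the output path varies by build setup) and a kubeconfig pointing at the cluster, the same spec can usually be selected directly with a simpler substring regex:

    ./_output/bin/e2e.test --kubeconfig="$KUBECONFIG" \
      --ginkgo.focus='should return command exit codes .*running a failing command without --restart=Never, but with --rm'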
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00114aff0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
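The failure below is in framework setup rather than in the test body: namespace creation keeps hitting connection refused, and once the namespace does exist the default service account never appears before the wait times out. An illustrative manual equivalent of what BeforeEach waits on, assuming the cluster is reachable again (standard kubectl usage):

    kubectl --kubeconfig=/workspace/.kube/config -n kubectl-1226 get serviceaccount default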
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:18:29.507 Nov 26 01:18:29.507: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 01:18:29.509 Nov 26 01:18:29.549: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:31.589: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:33.589: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:35.589: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:37.588: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:39.588: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:20:44.397: INFO: Unexpected error: <*fmt.wrapError | 0xc001292780>: { msg: "wait for service account \"default\" in namespace \"kubectl-1226\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001c99e0>{ s: "timed out waiting for the condition", }, } Nov 26 01:20:44.397: FAIL: wait for service account "default" in namespace "kubectl-1226": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00114aff0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 01:20:44.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:20:44.48 STEP: Collecting events from namespace "kubectl-1226". 11/26/22 01:20:44.48 STEP: Found 0 events. 
11/26/22 01:20:44.521 Nov 26 01:20:44.562: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:20:44.562: INFO: Nov 26 01:20:44.605: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:20:44.654: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 12717 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:17:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:17:28 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:20:44.655: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:20:44.704: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:20:44.758: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container kube-apiserver ready: true, restart count 3 Nov 26 01:20:44.758: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 26 01:20:44.758: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 00:56:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container kube-addon-manager ready: true, restart count 2 Nov 26 01:20:44.758: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 00:56:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container l7-lb-controller ready: false, restart count 7 Nov 26 01:20:44.758: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container kube-scheduler ready: true, restart count 4 Nov 26 01:20:44.758: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container etcd-container ready: true, restart count 5 Nov 26 01:20:44.758: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container etcd-container ready: true, restart count 3 Nov 26 01:20:44.758: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:44.758: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 01:20:44.758: INFO: metadata-proxy-v0.1-8h6mf started at 2022-11-26 00:56:42 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:44.758: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:44.758: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:45.348: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 01:20:45.348: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:45.399: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 13108 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8152":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-provisioning-5652":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:20:45.400: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:45.701: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:46.024: INFO: kube-proxy-bootstrap-e2e-minion-group-0hjv started at 2022-11-26 00:56:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:20:46.024: INFO: kube-dns-autoscaler-5f6455f985-2brqn started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container autoscaler ready: false, restart count 8 Nov 26 01:20:46.024: INFO: execpod-acceptfj5ts started at 2022-11-26 00:59:54 +0000 UTC (0+1 container statuses recorded) Nov 26 
01:20:46.024: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 01:20:46.024: INFO: netserver-0 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: true, restart count 2 Nov 26 01:20:46.024: INFO: pod-configmaps-a8d056c0-ff53-45cb-8c13-ec73b1032b04 started at 2022-11-26 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:46.024: INFO: pod-d647abcb-295b-4ba3-bb3b-72f4c6f3de02 started at 2022-11-26 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.024: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-bkkbv started at 2022-11-26 01:03:25 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: false, restart count 6 Nov 26 01:20:46.024: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:52 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.024: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:20:46.024: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 01:20:46.024: INFO: metadata-proxy-v0.1-8d7ds started at 2022-11-26 00:56:40 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:46.024: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:46.024: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:46.024: INFO: volume-snapshot-controller-0 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container volume-snapshot-controller ready: false, restart count 7 Nov 26 01:20:46.024: INFO: pod-subpath-test-dynamicpv-2vf4 started at 2022-11-26 01:00:19 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:46.024: INFO: Init container init-volume-dynamicpv-2vf4 ready: true, restart count 1 Nov 26 01:20:46.024: INFO: Container test-container-subpath-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:20:46.024: INFO: Container test-container-volume-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:20:46.024: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-kpcm8 started at 2022-11-26 00:59:55 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 01:20:46.024: INFO: netserver-0 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: false, restart count 6 Nov 26 01:20:46.024: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-ct8rx started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:20:46.024: INFO: pod-configmaps-cc7f33ac-2f26-44c6-ad1b-c8b91ecdfde7 started at 2022-11-26 01:02:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:46.024: INFO: 
csi-hostpathplugin-0 started at 2022-11-26 01:12:23 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.024: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:20:46.024: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:20:46.024: INFO: l7-default-backend-8549d69d99-x8spc started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 01:20:46.024: INFO: netserver-0 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: false, restart count 3 Nov 26 01:20:46.024: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:15:34 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.024: INFO: Container csi-attacher ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-provisioner ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-resizer ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container hostpath ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container liveness-probe ready: false, restart count 2 Nov 26 01:20:46.024: INFO: Container node-driver-registrar ready: false, restart count 2 Nov 26 01:20:46.024: INFO: pod-subpath-test-inlinevolume-v5md started at 2022-11-26 01:00:23 +0000 UTC (1+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Init container init-volume-inlinevolume-v5md ready: true, restart count 0 Nov 26 01:20:46.024: INFO: Container test-container-subpath-inlinevolume-v5md ready: false, restart count 0 Nov 26 01:20:46.024: INFO: netserver-0 started at 2022-11-26 01:06:00 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: false, restart count 6 Nov 26 01:20:46.024: INFO: coredns-6d97d5ddb-ghpwb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container coredns ready: false, restart count 8 Nov 26 01:20:46.024: INFO: konnectivity-agent-4brl9 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:20:46.024: INFO: netserver-0 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: true, restart count 4 Nov 26 01:20:46.024: INFO: httpd started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container httpd ready: true, restart count 8 Nov 26 01:20:46.024: INFO: execpod-dropdkfjx started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.024: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-5md2t started at 2022-11-26 01:03:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.024: 
INFO: csi-mockplugin-0 started at 2022-11-26 00:59:48 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:46.024: INFO: Container busybox ready: false, restart count 6 Nov 26 01:20:46.024: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:20:46.024: INFO: Container driver-registrar ready: false, restart count 7 Nov 26 01:20:46.024: INFO: Container mock ready: false, restart count 7 Nov 26 01:20:46.024: INFO: ss-0 started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container webserver ready: false, restart count 11 Nov 26 01:20:46.024: INFO: lb-sourcerange-n4k92 started at 2022-11-26 01:00:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.024: INFO: Container netexec ready: false, restart count 7 Nov 26 01:20:46.725: INFO: Latency metrics for node bootstrap-e2e-minion-group-0hjv Nov 26 01:20:46.725: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:46.769: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 13036 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9512":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2301":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-8735":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-9114":"bootstrap-e2e-minion-group-2982","csi-hostpath-volumemode-9250":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-8838":"csi-mock-csi-mock-volumes-8838","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:13:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:19:16 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec 
kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},},Config:nil,},} Nov 26 01:20:46.770: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:46.825: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:20:46.938: INFO: ss-1 started at 2022-11-26 01:02:07 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-xmc6r started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:20:46.938: INFO: pod-a9bf9170-0527-4b88-ab1c-09ab6058409d started at 2022-11-26 01:03:43 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:06:57 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:20:46.938: INFO: pod-subpath-test-inlinevolume-wppj started at 2022-11-26 00:59:05 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:46.938: INFO: Init container init-volume-inlinevolume-wppj ready: true, restart count 0 Nov 26 01:20:46.938: INFO: Container test-container-subpath-inlinevolume-wppj ready: true, restart count 9 Nov 26 01:20:46.938: INFO: Container test-container-volume-inlinevolume-wppj ready: false, restart count 6 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container hostpath ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: false, restart count 3 Nov 26 01:20:46.938: INFO: Container node-driver-registrar 
ready: false, restart count 3 Nov 26 01:20:46.938: INFO: external-local-nodeport-hpnxr started at 2022-11-26 01:00:15 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container netexec ready: true, restart count 5 Nov 26 01:20:46.938: INFO: hostpath-3-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container hostpath-3-client ready: true, restart count 3 Nov 26 01:20:46.938: INFO: netserver-1 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 0 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:46.938: INFO: back-off-cap started at 2022-11-26 01:08:51 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container back-off-cap ready: false, restart count 7 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:07 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:46.938: INFO: kube-proxy-bootstrap-e2e-minion-group-2982 started at 2022-11-26 00:56:38 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 01:20:46.938: INFO: pod-configmaps-0039d476-e3ec-4d1f-95a0-589475853cfc started at 2022-11-26 01:02:20 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-262gq started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-xrccm started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:20:46.938: INFO: pod-bed0f594-e6f2-4d1d-b243-e6b3a7adfbf2 started at 2022-11-26 01:03:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.938: INFO: var-expansion-8d1d368e-67cd-4a67-b256-8d870f10a0e2 started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container dapi-container 
ready: false, restart count 0 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-fm6cq started at 2022-11-26 01:03:21 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:20:46.938: INFO: metadata-proxy-v0.1-2rxjj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:46.938: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:46.938: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:46.938: INFO: external-local-update-rfn9p started at 2022-11-26 01:03:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container netexec ready: true, restart count 1 Nov 26 01:20:46.938: INFO: netserver-1 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:46.938: INFO: test-hostpath-type-9bw9n started at 2022-11-26 01:16:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:20:46.938: INFO: pod-subpath-test-preprovisionedpv-mkpm started at 2022-11-26 01:02:54 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:46.938: INFO: Init container init-volume-preprovisionedpv-mkpm ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container test-container-subpath-preprovisionedpv-mkpm ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container test-container-volume-preprovisionedpv-mkpm ready: true, restart count 6 Nov 26 01:20:46.938: INFO: hostpath-1-client started at 2022-11-26 01:03:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container hostpath-1-client ready: true, restart count 2 Nov 26 01:20:46.938: INFO: test-hostpath-type-lgxhw started at 2022-11-26 01:16:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:21 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:46.938: INFO: pod-5be3eec2-e823-4f42-901c-fd502ef8f0d6 started at 2022-11-26 00:59:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: true, 
restart count 2 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:20:46.938: INFO: hostpath-2-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container hostpath-2-client ready: true, restart count 2 Nov 26 01:20:46.938: INFO: netserver-1 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 6 Nov 26 01:20:46.938: INFO: hostpath-0-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container hostpath-0-client ready: true, restart count 4 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-n9wzs started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:20:46.938: INFO: pod-subpath-test-inlinevolume-7tmj started at 2022-11-26 01:03:45 +0000 UTC (1+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Init container init-volume-inlinevolume-7tmj ready: true, restart count 0 Nov 26 01:20:46.938: INFO: Container test-container-subpath-inlinevolume-7tmj ready: false, restart count 0 Nov 26 01:20:46.938: INFO: lb-internal-8mn52 started at 2022-11-26 01:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container netexec ready: false, restart count 6 Nov 26 01:20:46.938: INFO: csi-mockplugin-0 started at 2022-11-26 01:13:39 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:46.938: INFO: Container busybox ready: true, restart count 4 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 01:20:46.938: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:20:46.938: INFO: Container mock ready: true, restart count 5 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-x689s started at 2022-11-26 01:13:50 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:20:46.938: INFO: test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 0 Nov 26 01:20:46.938: INFO: test-container-pod started at 2022-11-26 01:16:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 1 Nov 26 01:20:46.938: INFO: external-provisioner-pm8mw started at 2022-11-26 01:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container nfs-provisioner ready: true, restart count 3 Nov 26 01:20:46.938: INFO: konnectivity-agent-kbwq2 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 01:20:46.938: INFO: netserver-1 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container webserver ready: true, restart count 5 Nov 26 01:20:46.938: INFO: pod-subpath-test-preprovisionedpv-xdzr started at 2022-11-26 01:02:38 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:46.938: INFO: Init container init-volume-preprovisionedpv-xdzr ready: true, restart count 0 Nov 26 01:20:46.938: INFO: Container test-container-subpath-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:20:46.938: INFO: Container 
test-container-volume-preprovisionedpv-xdzr ready: false, restart count 6 Nov 26 01:20:46.938: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:30 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container hostpath ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 01:20:46.938: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-p2ns7 started at 2022-11-26 00:59:16 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:20:46.938: INFO: csi-mockplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:46.938: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:20:46.938: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 01:20:46.938: INFO: Container mock ready: true, restart count 3 Nov 26 01:20:46.938: INFO: ilb-host-exec started at 2022-11-26 01:12:53 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:20:46.938: INFO: host-test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: false, restart count 2 Nov 26 01:20:46.938: INFO: metrics-server-v0.5.2-867b8754b9-w4frb started at 2022-11-26 00:57:14 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:46.938: INFO: Container metrics-server ready: false, restart count 7 Nov 26 01:20:46.938: INFO: Container metrics-server-nanny ready: false, restart count 8 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-kxg4f started at 2022-11-26 01:00:17 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 01:20:46.938: INFO: hostexec-bootstrap-e2e-minion-group-2982-hqtxc started at 2022-11-26 01:02:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:20:46.938: INFO: pod-4db8d57c-3453-4b56-99f5-8158379eb684 started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:20:46.938: INFO: test-hostpath-type-jf9w7 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:46.938: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 01:20:49.424: INFO: Latency metrics for node bootstrap-e2e-minion-group-2982 Nov 26 01:20:49.424: INFO: Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.483: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 13086 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6742":"bootstrap-e2e-minion-group-krkd","csi-hostpath-provisioning-1838":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-4622":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 01:15:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 01:16:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:20:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 
UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:20:49.483: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.530: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:20:49.594: INFO: ss-2 started at 2022-11-26 01:03:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container webserver ready: false, restart count 7 Nov 26 01:20:49.595: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:20:49.595: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 01:20:49.595: INFO: Container mock ready: false, restart count 5 Nov 26 01:20:49.595: INFO: pvc-volume-tester-5lrn7 started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container volume-tester ready: false, restart count 0 Nov 26 01:20:49.595: INFO: netserver-2 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container webserver ready: false, restart count 7 Nov 26 01:20:49.595: INFO: konnectivity-agent-qtkxb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container konnectivity-agent ready: true, restart count 7 Nov 26 01:20:49.595: INFO: netserver-2 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container webserver ready: true, restart count 7 Nov 26 01:20:49.595: INFO: metadata-proxy-v0.1-qzrwj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:20:49.595: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:20:49.595: INFO: 
Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:20:49.595: INFO: netserver-2 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container webserver ready: true, restart count 3 Nov 26 01:20:49.595: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container hostpath ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 01:20:49.595: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 01:20:49.595: INFO: hostexec-bootstrap-e2e-minion-group-krkd-4bh2r started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:20:49.595: INFO: hostexec-bootstrap-e2e-minion-group-krkd-2wbgn started at 2022-11-26 01:01:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 01:20:49.595: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container mock ready: true, restart count 5 Nov 26 01:20:49.595: INFO: pod-back-off-image started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container back-off ready: false, restart count 8 Nov 26 01:20:49.595: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:49.595: INFO: pod-subpath-test-preprovisionedpv-snr7 started at 2022-11-26 00:59:30 +0000 UTC (1+2 container statuses recorded) Nov 26 01:20:49.595: INFO: Init container init-volume-preprovisionedpv-snr7 ready: true, restart count 6 Nov 26 01:20:49.595: INFO: Container test-container-subpath-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:20:49.595: INFO: Container test-container-volume-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:20:49.595: INFO: netserver-2 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container webserver ready: true, restart count 3 Nov 26 01:20:49.595: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+4 container statuses recorded) Nov 26 01:20:49.595: INFO: Container busybox ready: false, restart count 6 Nov 26 01:20:49.595: INFO: Container csi-provisioner 
ready: false, restart count 6 Nov 26 01:20:49.595: INFO: Container driver-registrar ready: true, restart count 8 Nov 26 01:20:49.595: INFO: Container mock ready: true, restart count 8 Nov 26 01:20:49.595: INFO: kube-proxy-bootstrap-e2e-minion-group-krkd started at 2022-11-26 00:56:37 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 01:20:49.595: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:51 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:20:49.595: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:20:49.595: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:14:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:20:49.595: INFO: Container csi-attacher ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container csi-provisioner ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container csi-resizer ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container csi-snapshotter ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container hostpath ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container liveness-probe ready: false, restart count 3 Nov 26 01:20:49.595: INFO: Container node-driver-registrar ready: false, restart count 3 Nov 26 01:20:49.595: INFO: coredns-6d97d5ddb-bw2sm started at 2022-11-26 00:57:04 +0000 UTC (0+1 container statuses recorded) Nov 26 01:20:49.595: INFO: Container coredns ready: false, restart count 8 Nov 26 01:20:50.518: INFO: Latency metrics for node bootstrap-e2e-minion-group-krkd [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-1226" for this suite. 11/26/22 01:20:50.518
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
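The --ginkgo.focus argument above is a regular expression with spaces escaped as \s; unescaped, it selects the spec "[sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field". As a rough sketch, the same spec can usually be run against an existing cluster with a locally built e2e.test binary; the binary path, the provider, and the kubeconfig below are assumptions for illustration, not taken from this log.
# hypothetical direct invocation, assuming e2e.test was built with: make WHAT=test/e2e/e2e.test
./_output/bin/e2e.test --provider=gce --kubeconfig="$KUBECONFIG" \
  --ginkgo.focus='\[sig-network\] LoadBalancers ESIPP \[Slow\] should handle updates to ExternalTrafficPolicy field$'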
test/e2e/framework/pod/pod_client.go:99 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Create(0xc000d57128, 0x66e0100?) test/e2e/framework/pod/pod_client.go:99 +0xe7 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createPod(0xc00026c8c0?, 0xc003a15ee0?) test/e2e/framework/network/utils.go:895 +0x6d k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00026c8c0, {0x75c6f7c, 0x9}, 0xc003664780) test/e2e/framework/network/utils.go:859 +0x689 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00026c8c0, 0x7eff2c6a0468?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00026c8c0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000d38000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 There were additional failures detected after the initial failure: [FAILED] Nov 26 01:03:58.508: failed to list events in namespace "esipp-8571": Get "https://34.168.44.214/api/v1/namespaces/esipp-8571/events": dial tcp 34.168.44.214:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 01:03:58.548: Couldn't delete ns: "esipp-8571": Delete "https://34.168.44.214/api/v1/namespaces/esipp-8571": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/esipp-8571", Err:(*net.OpError)(0xc002fb1450)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:00:58.736 Nov 26 01:00:58.736: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 01:00:58.738 Nov 26 01:00:58.777: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:00.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:02.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:04.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:06.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:08.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:10.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:12.818: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:14.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:16.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:18.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:20.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:22.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:24.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:26.817: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:02:07.527 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:02:07.649 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-8571/external-local-update with type=LoadBalancer 11/26/22 01:02:08 STEP: setting ExternalTrafficPolicy=Local 11/26/22 01:02:08.001 STEP: waiting for 
loadbalancer for service esipp-8571/external-local-update 11/26/22 01:02:08.215 Nov 26 01:02:08.215: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 01:03:24.342 Nov 26 01:03:24.412: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 01:03:24.479: INFO: Found 0/1 pods - will retry Nov 26 01:03:26.524: INFO: Found all 1 pods Nov 26 01:03:26.524: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-rfn9p] Nov 26 01:03:26.524: INFO: Waiting up to 2m0s for pod "external-local-update-rfn9p" in namespace "esipp-8571" to be "running and ready" Nov 26 01:03:26.583: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 59.828658ms Nov 26 01:03:26.584: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:28.659: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135302784s Nov 26 01:03:28.659: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:30.644: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120096574s Nov 26 01:03:30.644: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:32.667: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143707914s Nov 26 01:03:32.667: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:34.633: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109670098s Nov 26 01:03:34.633: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:36.667: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.143784351s Nov 26 01:03:36.667: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:38.640: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.115869469s Nov 26 01:03:38.640: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:40.659: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.134912979s Nov 26 01:03:40.659: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:42.639: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.115502497s Nov 26 01:03:42.639: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:44.656: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.13233879s Nov 26 01:03:44.656: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:46.631: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 20.107777588s Nov 26 01:03:46.631: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:48.652: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 22.128316817s Nov 26 01:03:48.652: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:50.682: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 24.158508212s Nov 26 01:03:50.682: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:52.668: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 26.143953308s Nov 26 01:03:52.668: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:54.649: INFO: Pod "external-local-update-rfn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 28.125135397s Nov 26 01:03:54.649: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-rfn9p' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:03:56.637: INFO: Pod "external-local-update-rfn9p": Phase="Running", Reason="", readiness=true. Elapsed: 30.113353393s Nov 26 01:03:56.637: INFO: Pod "external-local-update-rfn9p" satisfied condition "running and ready" Nov 26 01:03:56.637: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-update-rfn9p] STEP: waiting for loadbalancer for service esipp-8571/external-local-update 11/26/22 01:03:56.637 Nov 26 01:03:56.637: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 01:03:56.701 STEP: Performing setup for networking test in namespace esipp-8571 11/26/22 01:03:57.981 STEP: creating a selector 11/26/22 01:03:57.981 STEP: Creating the service pods in kubernetes 11/26/22 01:03:57.981 Nov 26 01:03:57.981: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 01:03:58.256: INFO: Unexpected error: Error creating Pod: <*url.Error | 0xc003a50cf0>: { Op: "Post", URL: "https://34.168.44.214/api/v1/namespaces/esipp-8571/pods", Err: <*net.OpError | 0xc002fb1180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003665d10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001347b60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:03:58.257: FAIL: Error creating Pod: Post "https://34.168.44.214/api/v1/namespaces/esipp-8571/pods": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Create(0xc000d57128, 0x66e0100?) test/e2e/framework/pod/pod_client.go:99 +0xe7 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createPod(0xc00026c8c0?, 0xc003a15ee0?) test/e2e/framework/network/utils.go:895 +0x6d k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00026c8c0, {0x75c6f7c, 0x9}, 0xc003664780) test/e2e/framework/network/utils.go:859 +0x689 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00026c8c0, 0x7eff2c6a0468?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00026c8c0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000d38000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 Nov 26 01:03:58.296: INFO: Unexpected error: <*errors.errorString | 0xc000f58a50>: { s: "failed to get Service \"external-local-update\": Get \"https://34.168.44.214/api/v1/namespaces/esipp-8571/services/external-local-update\": dial tcp 34.168.44.214:443: connect: connection refused", } Nov 26 01:03:58.296: FAIL: failed to get Service "external-local-update": Get "https://34.168.44.214/api/v1/namespaces/esipp-8571/services/external-local-update": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7.1() test/e2e/network/loadbalancer.go:1495 +0xae panic({0x70eb7e0, 0xc0009895e0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00008efc0, 0x8b}, {0xc0011ef010?, 0xc0016eed80?, 0xc0011ef038?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc003a50cf0}, {0xc000049970?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Create(0xc000d57128, 0x66e0100?) 
test/e2e/framework/pod/pod_client.go:99 +0xe7 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createPod(0xc00026c8c0?, 0xc003a15ee0?) test/e2e/framework/network/utils.go:895 +0x6d k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00026c8c0, {0x75c6f7c, 0x9}, 0xc003664780) test/e2e/framework/network/utils.go:859 +0x689 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00026c8c0, 0x7eff2c6a0468?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00026c8c0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000d38000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 01:03:58.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 01:03:58.336: INFO: Output of kubectl describe svc: Nov 26 01:03:58.336: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-8571 describe svc --namespace=esipp-8571' Nov 26 01:03:58.468: INFO: rc: 1 Nov 26 01:03:58.468: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:03:58.468 STEP: Collecting events from namespace "esipp-8571". 
11/26/22 01:03:58.468 Nov 26 01:03:58.507: INFO: Unexpected error: failed to list events in namespace "esipp-8571": <*url.Error | 0xc00123a1e0>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/esipp-8571/events", Err: <*net.OpError | 0xc003052500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0043b0fc0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00120c400>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:03:58.508: FAIL: failed to list events in namespace "esipp-8571": Get "https://34.168.44.214/api/v1/namespaces/esipp-8571/events": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0011ea5c0, {0xc002d8c010, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00428bba0}, {0xc002d8c010, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0011ea650?, {0xc002d8c010?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000d38000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000e61570?, 0xc0042dcf50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0042dcf40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000e61570?, 0x2622c40?}, {0xae73300?, 0xc0042dcf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-8571" for this suite. 11/26/22 01:03:58.508 Nov 26 01:03:58.548: FAIL: Couldn't delete ns: "esipp-8571": Delete "https://34.168.44.214/api/v1/namespaces/esipp-8571": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/esipp-8571", Err:(*net.OpError)(0xc002fb1450)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d38000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000e61450?, 0xc0042ddfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000e61450?, 0x0?}, {0xae73300?, 0x5?, 0xc0035ed4b8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
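The recurring dial tcp 34.168.44.214:443: connect: connection refused errors in the log above suggest the kube-apiserver endpoint was intermittently refusing connections during this window, which is also why the namespace dump and deletion in DeferCleanup failed. A minimal way to check the same endpoint by hand, assuming the workspace kubeconfig from the log is available locally; both commands are illustrative and were not part of the original run.
# hypothetical reachability checks against the apiserver the test was talking to
kubectl --kubeconfig=/workspace/.kube/config get --raw /healthz
curl -sk https://34.168.44.214/healthz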
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:834 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002ecd20, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00115c000, {0x0, 0x0, 0xc003ba1350?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445from junit_01.xml
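The stack above ends in NetworkingTestConfig.setup, and the log below shows that step polling for the node-port-service endpoints to reach 3 until it times out. A minimal sketch for inspecting the same state by hand while the namespace still exists; the namespace and service names are taken from the log, and the kubeconfig path is the one the run reports.
# hypothetical manual inspection of the endpoints the test was waiting on
kubectl --kubeconfig=/workspace/.kube/config -n esipp-2504 get endpoints node-port-service -o yaml
kubectl --kubeconfig=/workspace/.kube/config -n esipp-2504 get pods -o wide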
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:14:01.482 Nov 26 01:14:01.482: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 01:14:01.485 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:14:01.665 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:14:01.756 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-2504/external-local-nodes with type=LoadBalancer 11/26/22 01:14:02.25 STEP: setting ExternalTrafficPolicy=Local 11/26/22 01:14:02.251 STEP: waiting for loadbalancer for service esipp-2504/external-local-nodes 11/26/22 01:14:02.41 Nov 26 01:14:02.410: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-2504/external-local-nodes 11/26/22 01:15:12.657 Nov 26 01:15:12.657: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-2504 11/26/22 01:15:12.799 STEP: creating a selector 11/26/22 01:15:12.799 STEP: Creating the service pods in kubernetes 11/26/22 01:15:12.799 Nov 26 01:15:12.799: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 01:15:13.274: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-2504" to be "running and ready" Nov 26 01:15:13.329: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 54.728351ms Nov 26 01:15:13.329: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:15:15.444: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170492808s Nov 26 01:15:15.444: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:15:17.422: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147915556s Nov 26 01:15:17.422: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:15:19.404: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130347565s Nov 26 01:15:19.404: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:15:21.407: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.133663526s Nov 26 01:15:21.407: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:23.399: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.124840615s Nov 26 01:15:23.399: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:25.398: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.123976925s Nov 26 01:15:25.398: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:27.396: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.122386929s Nov 26 01:15:27.396: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:29.402: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.128283872s Nov 26 01:15:29.402: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:31.387: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.113644878s Nov 26 01:15:31.387: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:33.469: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.194915385s Nov 26 01:15:33.469: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:35.387: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.113067821s Nov 26 01:15:35.387: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:37.393: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.11896238s Nov 26 01:15:37.393: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 01:15:39.392: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 26.118469969s Nov 26 01:15:39.392: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 01:15:39.392: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 01:15:39.453: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-2504" to be "running and ready" Nov 26 01:15:39.543: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 89.409746ms Nov 26 01:15:39.543: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 01:15:41.599: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.145501986s Nov 26 01:15:41.599: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 26 01:15:41.599: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 26 01:15:41.655: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-2504" to be "running and ready" Nov 26 01:15:41.736: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 80.901919ms Nov 26 01:15:41.736: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:43.853: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.197361104s Nov 26 01:15:43.853: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:45.802: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.146887432s Nov 26 01:15:45.802: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:47.832: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.176337652s Nov 26 01:15:47.832: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:49.793: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 8.13808609s Nov 26 01:15:49.794: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:51.799: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 10.143860234s Nov 26 01:15:51.799: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:53.872: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 12.21634787s Nov 26 01:15:53.872: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:55.792: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 14.136418201s Nov 26 01:15:55.792: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:57.802: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.146303161s Nov 26 01:15:57.802: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:15:59.778: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 18.12281659s Nov 26 01:15:59.778: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 01:16:01.781: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 20.126049927s Nov 26 01:16:01.781: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 26 01:16:01.781: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/26/22 01:16:01.822 Nov 26 01:16:01.904: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-2504" to be "running" Nov 26 01:16:01.944: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 40.255021ms Nov 26 01:16:03.990: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086606937s Nov 26 01:16:05.985: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081421242s Nov 26 01:16:07.988: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.084439412s Nov 26 01:16:07.988: INFO: Pod "test-container-pod" satisfied condition "running" Nov 26 01:16:08.043: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/26/22 01:16:08.043 Nov 26 01:16:08.044: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/26/22 01:16:08.134 Nov 26 01:16:08.226: INFO: Service node-port-service in namespace esipp-2504 found. Nov 26 01:16:08.360: INFO: Service session-affinity-service in namespace esipp-2504 found. 
STEP: Waiting for NodePort service to expose endpoint 11/26/22 01:16:08.401 Nov 26 01:16:09.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:10.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:11.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:12.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:13.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:14.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:15.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:16.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:17.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:18.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:19.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:20.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:21.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:22.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:23.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:24.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:25.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:26.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:27.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:28.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:29.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:30.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:31.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:32.402: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:33.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:34.403: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:35.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:36.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:37.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:38.401: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:38.442: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 01:16:38.494: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-2504: <*errors.errorString | 0xc000215d70>: { s: "timed out waiting for the condition", } Nov 26 01:16:38.494: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-2504: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002ecd20, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc00115c000, {0x0, 0x0, 0xc003ba1350?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 Nov 26 01:16:38.587: INFO: Waiting up to 15m0s for service "external-local-nodes" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 01:16:48.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 01:16:49.096: INFO: Output of kubectl describe svc: Nov 26 01:16:49.096: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-2504 describe svc --namespace=esipp-2504' Nov 26 01:16:49.906: INFO: stderr: "" Nov 26 01:16:49.906: INFO: stdout: "Name: external-local-nodes\nNamespace: esipp-2504\nLabels: testid=external-local-nodes-486e0488-148d-490c-b0e9-87d54599ae39\nAnnotations: <none>\nSelector: testid=external-local-nodes-486e0488-148d-490c-b0e9-87d54599ae39\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.127.136\nIPs: 10.0.127.136\nPort: <unset> 8081/TCP\nTargetPort: 80/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 2m15s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 98s service-controller Ensured load balancer\n Normal Type 11s service-controller LoadBalancer -> ClusterIP\n\n\nName: node-port-service\nNamespace: esipp-2504\nLabels: <none>\nAnnotations: <none>\nSelector: selector-ea00d8ab-5471-4aa5-b40f-29c72cb1b6ec=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.211.178\nIPs: 10.0.211.178\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31283/TCP\nEndpoints: 10.64.0.168:8083,10.64.1.252:8083,10.64.3.203:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 32592/UDP\nEndpoints: 10.64.0.168:8081,10.64.1.252:8081,10.64.3.203:8081\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-2504\nLabels: <none>\nAnnotations: <none>\nSelector: selector-ea00d8ab-5471-4aa5-b40f-29c72cb1b6ec=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.209.47\nIPs: 10.0.209.47\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 32037/TCP\nEndpoints: 10.64.0.168:8083,10.64.1.252:8083,10.64.3.203:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 31571/UDP\nEndpoints: 10.64.0.168:8081,10.64.1.252:8081,10.64.3.203:8081\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 26 01:16:49.906: INFO: Name: external-local-nodes Namespace: esipp-2504 Labels: testid=external-local-nodes-486e0488-148d-490c-b0e9-87d54599ae39 Annotations: <none> Selector: testid=external-local-nodes-486e0488-148d-490c-b0e9-87d54599ae39 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.127.136 IPs: 10.0.127.136 Port: <unset> 8081/TCP TargetPort: 80/TCP Endpoints: <none> Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 2m15s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 98s service-controller 
Ensured load balancer Normal Type 11s service-controller LoadBalancer -> ClusterIP Name: node-port-service Namespace: esipp-2504 Labels: <none> Annotations: <none> Selector: selector-ea00d8ab-5471-4aa5-b40f-29c72cb1b6ec=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.211.178 IPs: 10.0.211.178 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 31283/TCP Endpoints: 10.64.0.168:8083,10.64.1.252:8083,10.64.3.203:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 32592/UDP Endpoints: 10.64.0.168:8081,10.64.1.252:8081,10.64.3.203:8081 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-2504 Labels: <none> Annotations: <none> Selector: selector-ea00d8ab-5471-4aa5-b40f-29c72cb1b6ec=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.209.47 IPs: 10.0.209.47 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 32037/TCP Endpoints: 10.64.0.168:8083,10.64.1.252:8083,10.64.3.203:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 31571/UDP Endpoints: 10.64.0.168:8081,10.64.1.252:8081,10.64.3.203:8081 Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:16:49.906 STEP: Collecting events from namespace "esipp-2504". 11/26/22 01:16:49.906 STEP: Found 23 events. 11/26/22 01:16:49.969 Nov 26 01:16:49.969: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-2504/netserver-0 to bootstrap-e2e-minion-group-0hjv Nov 26 01:16:49.969: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-2504/netserver-1 to bootstrap-e2e-minion-group-2982 Nov 26 01:16:49.969: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-2504/netserver-2 to bootstrap-e2e-minion-group-krkd Nov 26 01:16:49.969: INFO: At 2022-11-26 01:14:34 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:11 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:19 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-0hjv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:19 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-0hjv} Created: Created container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:19 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-0hjv} Started: Started container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-2982} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-2982} Created: Created container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-2982} Started: Started 
container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} Created: Created container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:21 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} Started: Started container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:22 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-0hjv} Killing: Stopping container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:22 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} Killing: Stopping container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:23 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-0hjv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:23 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 01:16:49.969: INFO: At 2022-11-26 01:15:26 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-krkd} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-2504(533055a4-c16f-4565-ba04-416ba1710264) Nov 26 01:16:49.969: INFO: At 2022-11-26 01:16:03 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-2982} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 01:16:49.969: INFO: At 2022-11-26 01:16:03 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-2982} Created: Created container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:16:03 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-2982} Started: Started container webserver Nov 26 01:16:49.969: INFO: At 2022-11-26 01:16:38 +0000 UTC - event for external-local-nodes: {service-controller } Type: LoadBalancer -> ClusterIP Nov 26 01:16:50.027: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:16:50.027: INFO: netserver-0 bootstrap-e2e-minion-group-0hjv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC }] Nov 26 01:16:50.027: INFO: netserver-1 bootstrap-e2e-minion-group-2982 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC }] Nov 26 01:16:50.027: INFO: netserver-2 bootstrap-e2e-minion-group-krkd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:15:18 +0000 UTC }] Nov 26 01:16:50.027: 
INFO: test-container-pod bootstrap-e2e-minion-group-2982 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:16:01 +0000 UTC }] Nov 26 01:16:50.027: INFO: Nov 26 01:16:50.564: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:16:50.610: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 8825 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:12:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:12:20 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:12:20 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:12:20 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:12:20 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:16:50.611: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:16:50.667: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:16:50.754: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container kube-controller-manager ready: true, restart count 6 Nov 26 01:16:50.754: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 00:56:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container kube-addon-manager ready: true, restart count 2 Nov 26 01:16:50.754: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 00:56:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container l7-lb-controller ready: true, restart count 7 Nov 26 01:16:50.754: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container kube-apiserver ready: true, restart count 2 Nov 26 01:16:50.754: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container etcd-container ready: true, restart count 5 Nov 26 01:16:50.754: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container etcd-container ready: true, restart count 2 Nov 26 01:16:50.754: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 01:16:50.754: INFO: metadata-proxy-v0.1-8h6mf started at 2022-11-26 00:56:42 +0000 UTC (0+2 container statuses recorded) Nov 26 01:16:50.754: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:16:50.754: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:16:50.754: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:50.754: INFO: Container kube-scheduler ready: true, restart count 3 Nov 26 01:16:51.056: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 01:16:51.056: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:16:51.158: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 12335 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3787":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-multivolume-8152":"bootstrap-e2e-minion-group-0hjv","csi-hostpath-provisioning-5652":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:47 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:16:13 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:16:51.159: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:16:51.251: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:16:51.390: INFO: pod-configmaps-a8d056c0-ff53-45cb-8c13-ec73b1032b04 started at 2022-11-26 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.390: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:16:51.390: INFO: pod-d647abcb-295b-4ba3-bb3b-72f4c6f3de02 started at 2022-11-26 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.390: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:51.390: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-bkkbv started at 2022-11-26 01:03:25 +0000 
UTC (0+1 container statuses recorded) Nov 26 01:16:51.390: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:16:51.390: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:52 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:51.391: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container hostpath ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 01:16:51.391: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 01:16:51.391: INFO: metadata-proxy-v0.1-8d7ds started at 2022-11-26 00:56:40 +0000 UTC (0+2 container statuses recorded) Nov 26 01:16:51.391: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:16:51.391: INFO: volume-snapshot-controller-0 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container volume-snapshot-controller ready: false, restart count 6 Nov 26 01:16:51.391: INFO: pod-subpath-test-dynamicpv-2vf4 started at 2022-11-26 01:00:19 +0000 UTC (1+2 container statuses recorded) Nov 26 01:16:51.391: INFO: Init container init-volume-dynamicpv-2vf4 ready: true, restart count 1 Nov 26 01:16:51.391: INFO: Container test-container-subpath-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:16:51.391: INFO: Container test-container-volume-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:16:51.391: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-kpcm8 started at 2022-11-26 00:59:55 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:16:51.391: INFO: netserver-0 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: true, restart count 6 Nov 26 01:16:51.391: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-ct8rx started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:16:51.391: INFO: pod-configmaps-cc7f33ac-2f26-44c6-ad1b-c8b91ecdfde7 started at 2022-11-26 01:02:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:16:51.391: INFO: l7-default-backend-8549d69d99-x8spc started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 01:16:51.391: INFO: netserver-0 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: true, restart count 2 Nov 26 01:16:51.391: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:15:34 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:51.391: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 01:16:51.391: 
INFO: Container hostpath ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 01:16:51.391: INFO: pod-subpath-test-inlinevolume-v5md started at 2022-11-26 01:00:23 +0000 UTC (1+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Init container init-volume-inlinevolume-v5md ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container test-container-subpath-inlinevolume-v5md ready: false, restart count 0 Nov 26 01:16:51.391: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:23 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:51.391: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container hostpath ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 01:16:51.391: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 01:16:51.391: INFO: coredns-6d97d5ddb-ghpwb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container coredns ready: false, restart count 7 Nov 26 01:16:51.391: INFO: konnectivity-agent-4brl9 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container konnectivity-agent ready: true, restart count 7 Nov 26 01:16:51.391: INFO: netserver-0 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: true, restart count 4 Nov 26 01:16:51.391: INFO: httpd started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container httpd ready: false, restart count 7 Nov 26 01:16:51.391: INFO: netserver-0 started at 2022-11-26 01:06:00 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: false, restart count 6 Nov 26 01:16:51.391: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-5md2t started at 2022-11-26 01:03:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:16:51.391: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:48 +0000 UTC (0+4 container statuses recorded) Nov 26 01:16:51.391: INFO: Container busybox ready: false, restart count 5 Nov 26 01:16:51.391: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 01:16:51.391: INFO: Container driver-registrar ready: false, restart count 6 Nov 26 01:16:51.391: INFO: Container mock ready: false, restart count 6 Nov 26 01:16:51.391: INFO: ss-0 started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: false, restart count 6 Nov 26 01:16:51.391: INFO: lb-sourcerange-n4k92 started at 2022-11-26 01:00:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container netexec ready: true, restart count 7 Nov 26 01:16:51.391: INFO: execpod-dropdkfjx started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: false, restart count 4 Nov 26 01:16:51.391: INFO: kube-proxy-bootstrap-e2e-minion-group-0hjv 
started at 2022-11-26 00:56:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 01:16:51.391: INFO: kube-dns-autoscaler-5f6455f985-2brqn started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container autoscaler ready: false, restart count 7 Nov 26 01:16:51.391: INFO: execpod-acceptfj5ts started at 2022-11-26 00:59:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:16:51.391: INFO: netserver-0 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:51.391: INFO: Container webserver ready: true, restart count 1 Nov 26 01:16:51.749: INFO: Latency metrics for node bootstrap-e2e-minion-group-0hjv Nov 26 01:16:51.749: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:16:51.804: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 12316 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3663":"bootstrap-e2e-minion-group-2982","csi-hostpath-multivolume-9512":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2301":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-9114":"bootstrap-e2e-minion-group-2982","csi-hostpath-volumemode-9250":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-8838":"csi-mock-csi-mock-volumes-8838","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:13:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 01:16:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:45 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:14:10 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:14:10 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:14:10 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:14:10 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},},Config:nil,},} Nov 26 01:16:51.805: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:16:51.872: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-x689s started at 2022-11-26 01:13:50 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-n9wzs started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:16:52.125: INFO: pod-subpath-test-inlinevolume-7tmj started at 2022-11-26 01:03:45 +0000 UTC (1+1 container statuses recorded) Nov 26 01:16:52.125: INFO: 
Init container init-volume-inlinevolume-7tmj ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container test-container-subpath-inlinevolume-7tmj ready: false, restart count 0 Nov 26 01:16:52.125: INFO: lb-internal-8mn52 started at 2022-11-26 01:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container netexec ready: false, restart count 5 Nov 26 01:16:52.125: INFO: csi-mockplugin-0 started at 2022-11-26 01:13:39 +0000 UTC (0+4 container statuses recorded) Nov 26 01:16:52.125: INFO: Container busybox ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: false, restart count 3 Nov 26 01:16:52.125: INFO: Container driver-registrar ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container mock ready: true, restart count 4 Nov 26 01:16:52.125: INFO: test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 0 Nov 26 01:16:52.125: INFO: test-container-pod started at 2022-11-26 01:16:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 0 Nov 26 01:16:52.125: INFO: external-provisioner-pm8mw started at 2022-11-26 01:16:49 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 26 01:16:52.125: INFO: konnectivity-agent-kbwq2 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 26 01:16:52.125: INFO: netserver-1 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 5 Nov 26 01:16:52.125: INFO: pod-subpath-test-preprovisionedpv-xdzr started at 2022-11-26 01:02:38 +0000 UTC (1+2 container statuses recorded) Nov 26 01:16:52.125: INFO: Init container init-volume-preprovisionedpv-xdzr ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container test-container-subpath-preprovisionedpv-xdzr ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container test-container-volume-preprovisionedpv-xdzr ready: true, restart count 5 Nov 26 01:16:52.125: INFO: host-test-container-pod started at 2022-11-26 01:15:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:16:52.125: INFO: pod-subpath-test-preprovisionedpv-k9v5 started at 2022-11-26 01:16:46 +0000 UTC (1+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Init container init-volume-preprovisionedpv-k9v5 ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container test-container-subpath-preprovisionedpv-k9v5 ready: false, restart count 0 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:30 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 
01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-p2ns7 started at 2022-11-26 00:59:16 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 01:16:52.125: INFO: csi-mockplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+3 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container mock ready: true, restart count 3 Nov 26 01:16:52.125: INFO: ilb-host-exec started at 2022-11-26 01:12:53 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 01:16:52.125: INFO: test-hostpath-type-jf9w7 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 01:16:52.125: INFO: metrics-server-v0.5.2-867b8754b9-w4frb started at 2022-11-26 00:57:14 +0000 UTC (0+2 container statuses recorded) Nov 26 01:16:52.125: INFO: Container metrics-server ready: false, restart count 6 Nov 26 01:16:52.125: INFO: Container metrics-server-nanny ready: false, restart count 8 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-kxg4f started at 2022-11-26 01:00:17 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-hqtxc started at 2022-11-26 01:02:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:16:52.125: INFO: pod-4db8d57c-3453-4b56-99f5-8158379eb684 started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:06:57 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container hostpath ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: false, restart count 5 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 01:16:52.125: INFO: ss-1 started at 2022-11-26 01:02:07 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 5 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-xmc6r started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:16:52.125: INFO: pod-a9bf9170-0527-4b88-ab1c-09ab6058409d started at 2022-11-26 01:03:43 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:52.125: INFO: pod-subpath-test-inlinevolume-wppj started at 2022-11-26 00:59:05 +0000 UTC (1+2 container statuses recorded) Nov 26 01:16:52.125: INFO: Init container init-volume-inlinevolume-wppj ready: true, restart 
count 0 Nov 26 01:16:52.125: INFO: Container test-container-subpath-inlinevolume-wppj ready: false, restart count 8 Nov 26 01:16:52.125: INFO: Container test-container-volume-inlinevolume-wppj ready: true, restart count 6 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-6gm8d started at 2022-11-26 01:16:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: false, restart count 1 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:16:52.125: INFO: external-local-nodeport-hpnxr started at 2022-11-26 01:00:15 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container netexec ready: true, restart count 5 Nov 26 01:16:52.125: INFO: hostpath-3-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container hostpath-3-client ready: true, restart count 2 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:07 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:16:52.125: INFO: netserver-1 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 0 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:16:52.125: INFO: back-off-cap started at 2022-11-26 01:08:51 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container back-off-cap ready: false, restart count 6 Nov 26 01:16:52.125: INFO: kube-proxy-bootstrap-e2e-minion-group-2982 started at 2022-11-26 00:56:38 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 
01:16:52.125: INFO: pod-configmaps-0039d476-e3ec-4d1f-95a0-589475853cfc started at 2022-11-26 01:02:20 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-262gq started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-xrccm started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:16:52.125: INFO: pod-bed0f594-e6f2-4d1d-b243-e6b3a7adfbf2 started at 2022-11-26 01:03:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:52.125: INFO: var-expansion-8d1d368e-67cd-4a67-b256-8d870f10a0e2 started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container dapi-container ready: false, restart count 0 Nov 26 01:16:52.125: INFO: hostexec-bootstrap-e2e-minion-group-2982-fm6cq started at 2022-11-26 01:03:21 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:16:52.125: INFO: metadata-proxy-v0.1-2rxjj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:16:52.125: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:16:52.125: INFO: external-local-update-rfn9p started at 2022-11-26 01:03:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container netexec ready: true, restart count 1 Nov 26 01:16:52.125: INFO: netserver-1 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: false, restart count 3 Nov 26 01:16:52.125: INFO: pod-subpath-test-preprovisionedpv-mkpm started at 2022-11-26 01:02:54 +0000 UTC (1+2 container statuses recorded) Nov 26 01:16:52.125: INFO: Init container init-volume-preprovisionedpv-mkpm ready: true, restart count 2 Nov 26 01:16:52.125: INFO: Container test-container-subpath-preprovisionedpv-mkpm ready: false, restart count 6 Nov 26 01:16:52.125: INFO: Container test-container-volume-preprovisionedpv-mkpm ready: false, restart count 5 Nov 26 01:16:52.125: INFO: hostpath-1-client started at 2022-11-26 01:03:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container hostpath-1-client ready: true, restart count 2 Nov 26 01:16:52.125: INFO: pod-0ab584b9-2546-470c-bf48-4033e4c9d09c started at 2022-11-26 01:13:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:52.125: INFO: netserver-1 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container webserver ready: true, restart count 6 Nov 26 01:16:52.125: INFO: hostpath-0-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container hostpath-0-client ready: true, restart count 3 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:21 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, 
restart count 4 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 01:16:52.125: INFO: pod-5be3eec2-e823-4f42-901c-fd502ef8f0d6 started at 2022-11-26 00:59:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:16:52.125: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:52.125: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container hostpath ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 01:16:52.125: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 01:16:52.125: INFO: hostpath-2-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:52.125: INFO: Container hostpath-2-client ready: true, restart count 2 Nov 26 01:16:52.883: INFO: Latency metrics for node bootstrap-e2e-minion-group-2982 Nov 26 01:16:52.883: INFO: Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:16:52.969: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 12351 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2415":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-1813":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-4622":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 01:15:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 01:16:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:16:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:16:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:15:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:16:52.970: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:16:53.103: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:16:53.248: INFO: coredns-6d97d5ddb-bw2sm started at 2022-11-26 00:57:04 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container coredns ready: false, restart count 8 Nov 26 01:16:53.248: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:51 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:16:53.248: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:14:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:16:53.248: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 01:16:53.248: INFO: netserver-2 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container webserver ready: false, restart count 6 Nov 26 01:16:53.248: INFO: konnectivity-agent-qtkxb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 26 01:16:53.248: INFO: ss-2 
started at 2022-11-26 01:03:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container webserver ready: false, restart count 6 Nov 26 01:16:53.248: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:16:53.248: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:16:53.248: INFO: Container mock ready: true, restart count 5 Nov 26 01:16:53.248: INFO: pvc-volume-tester-5lrn7 started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container volume-tester ready: false, restart count 0 Nov 26 01:16:53.248: INFO: metadata-proxy-v0.1-qzrwj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:16:53.248: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:16:53.248: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:16:53.248: INFO: netserver-2 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container webserver ready: false, restart count 6 Nov 26 01:16:53.248: INFO: netserver-2 started at 2022-11-26 01:14:57 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container webserver ready: false, restart count 2 Nov 26 01:16:53.248: INFO: hostexec-bootstrap-e2e-minion-group-krkd-2wbgn started at 2022-11-26 01:01:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:16:53.248: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:16:53.248: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:16:53.248: INFO: hostexec-bootstrap-e2e-minion-group-krkd-4bh2r started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:16:53.248: INFO: pod-subpath-test-preprovisionedpv-snr7 started at 2022-11-26 00:59:30 +0000 UTC (1+2 container statuses recorded) Nov 26 01:16:53.248: INFO: Init container init-volume-preprovisionedpv-snr7 ready: true, restart count 6 Nov 26 01:16:53.248: INFO: Container test-container-subpath-preprovisionedpv-snr7 ready: false, restart count 7 Nov 26 01:16:53.248: INFO: Container test-container-volume-preprovisionedpv-snr7 ready: false, restart count 7 Nov 26 01:16:53.248: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:16:53.248: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 01:16:53.248: INFO: Container mock ready: true, restart count 5 Nov 26 01:16:53.248: INFO: pod-back-off-image started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container back-off 
ready: false, restart count 7 Nov 26 01:16:53.248: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:16:53.248: INFO: Container csi-attacher ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container csi-resizer ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container hostpath ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container liveness-probe ready: false, restart count 4 Nov 26 01:16:53.248: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 26 01:16:53.248: INFO: kube-proxy-bootstrap-e2e-minion-group-krkd started at 2022-11-26 00:56:37 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 01:16:53.248: INFO: netserver-2 started at 2022-11-26 01:15:18 +0000 UTC (0+1 container statuses recorded) Nov 26 01:16:53.248: INFO: Container webserver ready: true, restart count 3 Nov 26 01:16:53.248: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+4 container statuses recorded) Nov 26 01:16:53.248: INFO: Container busybox ready: false, restart count 5 Nov 26 01:16:53.248: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:16:53.248: INFO: Container driver-registrar ready: false, restart count 7 Nov 26 01:16:53.248: INFO: Container mock ready: false, restart count 7 Nov 26 01:16:53.622: INFO: Latency metrics for node bootstrap-e2e-minion-group-krkd [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-2504" for this suite. 11/26/22 01:16:53.622
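The per-node dumps above (conditions, addresses, container images, pod restart counts) are what the e2e framework logs to help debug a failure. The same node conditions can be read directly with client-go; the following is a minimal sketch, assuming the kubeconfig path this CI job uses (/workspace/.kube/config) and a node name copied from the dump — kubectl describe node bootstrap-e2e-minion-group-krkd shows the same conditions interactively.

// Sketch: print the node conditions that the dump above reports.
// Kubeconfig path and node name are assumptions taken from this log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "bootstrap-e2e-minion-group-krkd", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-28s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}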
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
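The failure summary that follows shows this ESIPP test failing in BeforeEach because the default service account never appeared in its namespace while the apiserver at 34.168.44.214 was refusing connections. As a rough illustration of that kind of readiness wait, here is a hedged client-go sketch; the package name, helper name, and timeouts are assumptions, not the framework's actual implementation.

// Sketch: poll until the "default" ServiceAccount exists in a namespace,
// the kind of wait that times out in the BeforeEach failure below.
// Illustrative helper under assumed names; not the e2e framework's code.
package samples

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDefaultServiceAccount(cs kubernetes.Interface, namespace string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts(namespace).Get(context.Background(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet, keep polling
		}
		if err != nil {
			// e.g. "connection refused" while the apiserver is unreachable
			fmt.Printf("transient error waiting for default service account: %v\n", err)
			return false, nil
		}
		return true, nil
	})
}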
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00115c000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 from junit_01.xml
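The secondary [PANICKED] failure is a nil pointer dereference in the suite's AfterEach at test/e2e/network/loadbalancer.go:1262, most likely because BeforeEach aborted before the test's client and state were initialized, yet the cleanup still ran. A defensive pattern in Ginkgo is to bail out of cleanup when setup never completed; the sketch below uses illustrative names rather than the file's real fields.

// Sketch: guard an AfterEach against setup that never completed.
// Variable and suite names here are illustrative assumptions.
package samples

import (
	"github.com/onsi/ginkgo/v2"
	"k8s.io/client-go/kubernetes"
)

var cs kubernetes.Interface // would be set by the suite's BeforeEach on success

var _ = ginkgo.Describe("[sig-network] LoadBalancers ESIPP [Slow] (sketch)", func() {
	ginkgo.AfterEach(func() {
		if cs == nil {
			// BeforeEach failed before initializing the client (for example,
			// the apiserver was unreachable), so there is nothing to clean up.
			return
		}
		// ... normal cleanup using cs ...
	})
})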
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:09:03.788 Nov 26 01:09:03.788: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 01:09:03.79 Nov 26 01:11:03.836: INFO: Unexpected error: <*fmt.wrapError | 0xc000682480>: { msg: "wait for service account \"default\" in namespace \"esipp-2516\": timed out waiting for the condition", err: <*errors.errorString | 0xc000215d70>{ s: "timed out waiting for the condition", }, } Nov 26 01:11:03.837: FAIL: wait for service account "default" in namespace "esipp-2516": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00115c000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 01:11:03.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:11:03.881 STEP: Collecting events from namespace "esipp-2516". 11/26/22 01:11:03.881 STEP: Found 0 events. 11/26/22 01:11:03.922 Nov 26 01:11:03.963: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:11:03.963: INFO: Nov 26 01:11:04.006: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:11:04.050: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 6357 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:07:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:14 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:14 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:14 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:07:14 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:11:04.050: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:11:04.105: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:11:04.196: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 01:11:04.196: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:11:04.274: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 8310 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1487":"bootstrap-e2e-minion-group-0hjv","csi-mock-csi-mock-volumes-2873":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 01:06:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 01:07:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:06:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:07:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:07:51 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:11:04.274: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:11:04.343: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:11:04.404: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0hjv: error trying to reach service: No agent available Nov 26 01:11:04.404: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:11:04.446: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 8286 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true 
failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9512":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-7474":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-8735":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-9114":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 01:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 01:08:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:10:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:09:34 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 
01:09:34 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:09:34 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:09:34 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-9512^d5c222b4-6d26-11ed-9bbd-2e8d6e190944,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2174^ccdb932e-6d26-11ed-939f-e67c3ef93248,DevicePath:,},},Config:nil,},} Nov 26 01:11:04.447: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:11:04.504: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:11:04.546: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2982: error trying to reach service: No agent available Nov 26 01:11:04.546: INFO: 
Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:11:04.588: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 8263 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6742":"bootstrap-e2e-minion-group-krkd","csi-hostpath-provisioning-1838":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-1009":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-1813":"bootstrap-e2e-minion-group-krkd","csi-mock-csi-mock-volumes-4622":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 01:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 01:08:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:10:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:06:44 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 
01:10:23 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:10:23 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:10:23 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:10:23 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1838^e33c27b2-6d26-11ed-b2e9-e2ad776514fe,DevicePath:,},},Config:nil,},} Nov 26 01:11:04.589: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:11:04.632: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:11:04.674: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-krkd: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-2516" for this suite. 11/26/22 01:11:04.674
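The teardown above fails for the same reason as every kubelet query in this dump: the control plane at 34.168.44.214:443 refuses TCP connections, and the konnectivity path answers "No agent available". A minimal standalone reachability probe (hypothetical, not part of the suite; the address is copied from the log) that reproduces the same dial error looks like this:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the refused requests logged above.
	addr := "34.168.44.214:443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Expected while the apiserver is down: "... connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect to", addr, "succeeded")
}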
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0036556c0, {0x75c6f7c, 0x9}, 0xc004fab8f0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0036556c0, 0x7f2b825129f0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0036556c0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000a32000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 There were additional failures detected after the initial failure: [FAILED] Nov 26 01:00:57.436: failed to list events in namespace "esipp-4340": Get "https://34.168.44.214/api/v1/namespaces/esipp-4340/events": dial tcp 34.168.44.214:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 01:00:57.476: Couldn't delete ns: "esipp-4340": Delete "https://34.168.44.214/api/v1/namespaces/esipp-4340": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/esipp-4340", Err:(*net.OpError)(0xc003be74a0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:00:14.207 Nov 26 01:00:14.207: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 01:00:14.209 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:00:14.427 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:00:14.515 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-4340/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/26/22 01:00:14.737 STEP: creating a pod to be part of the service external-local-nodeport 11/26/22 01:00:14.876 Nov 26 01:00:14.937: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 01:00:15.058: INFO: Found all 1 pods Nov 26 01:00:15.058: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-hpnxr] Nov 26 01:00:15.058: INFO: Waiting up to 2m0s for pod "external-local-nodeport-hpnxr" in namespace "esipp-4340" to be "running and ready" Nov 26 01:00:15.174: INFO: Pod "external-local-nodeport-hpnxr": Phase="Pending", Reason="", readiness=false. Elapsed: 115.439153ms Nov 26 01:00:15.174: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-hpnxr' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:00:17.231: INFO: Pod "external-local-nodeport-hpnxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172673553s Nov 26 01:00:17.231: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-hpnxr' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:00:19.329: INFO: Pod "external-local-nodeport-hpnxr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270648873s Nov 26 01:00:19.329: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-hpnxr' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:00:21.237: INFO: Pod "external-local-nodeport-hpnxr": Phase="Running", Reason="", readiness=true. Elapsed: 6.178759666s Nov 26 01:00:21.237: INFO: Pod "external-local-nodeport-hpnxr" satisfied condition "running and ready" Nov 26 01:00:21.237: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodeport-hpnxr] STEP: Performing setup for networking test in namespace esipp-4340 11/26/22 01:00:22.402 STEP: creating a selector 11/26/22 01:00:22.402 STEP: Creating the service pods in kubernetes 11/26/22 01:00:22.402 Nov 26 01:00:22.402: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 01:00:22.897: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-4340" to be "running and ready" Nov 26 01:00:23.032: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 135.230906ms Nov 26 01:00:23.032: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:00:25.104: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.207519718s Nov 26 01:00:25.104: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:00:27.105: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208641139s Nov 26 01:00:27.105: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:00:52.019: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.122109119s Nov 26 01:00:52.019: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:00:53.076: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.178883162s Nov 26 01:00:53.076: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 01:00:55.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 32.18131868s Nov 26 01:00:55.078: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 01:00:55.078: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 01:00:55.119: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-4340" to be "running and ready" Nov 26 01:00:55.160: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 41.017999ms Nov 26 01:00:55.160: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 01:00:57.199: INFO: Encountered non-retryable error while getting pod esipp-4340/netserver-1: Get "https://34.168.44.214/api/v1/namespaces/esipp-4340/pods/netserver-1": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:00:57.200: INFO: Unexpected error: <*fmt.wrapError | 0xc003bec0a0>: { msg: "error while waiting for pod esipp-4340/netserver-1 to be running and ready: Get \"https://34.168.44.214/api/v1/namespaces/esipp-4340/pods/netserver-1\": dial tcp 34.168.44.214:443: connect: connection refused", err: <*url.Error | 0xc004e70150>{ Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/esipp-4340/pods/netserver-1", Err: <*net.OpError | 0xc00302a140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004eec900>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003bec040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 26 01:00:57.200: FAIL: error while waiting for pod esipp-4340/netserver-1 to be running and ready: Get "https://34.168.44.214/api/v1/namespaces/esipp-4340/pods/netserver-1": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0036556c0, {0x75c6f7c, 0x9}, 0xc004fab8f0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0036556c0, 0x7f2b825129f0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0036556c0, 0x3c?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000a32000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 Nov 26 01:00:57.239: INFO: Unexpected error: <*url.Error | 0xc004d99440>: { Op: "Delete", URL: "https://34.168.44.214/api/v1/namespaces/esipp-4340/services/external-local-nodeport", Err: <*net.OpError | 0xc003be7040>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004e70510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0001e8540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:00:57.239: FAIL: Delete "https://34.168.44.214/api/v1/namespaces/esipp-4340/services/external-local-nodeport": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc004efc620}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00367c000, 0xce}, {0xc0044537c0?, 0xc00367c000?, 0xc0044537e8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc003bec0a0}, {0x0?, 0xc004e433b0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0036556c0, {0x75c6f7c, 0x9}, 0xc004fab8f0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0036556c0, 0x7f2b825129f0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0036556c0, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000a32000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 01:00:57.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 01:00:57.279: INFO: Output of kubectl describe svc: Nov 26 01:00:57.279: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-4340 describe svc --namespace=esipp-4340' Nov 26 01:00:57.397: INFO: rc: 1 Nov 26 01:00:57.397: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:00:57.397 STEP: Collecting events from namespace "esipp-4340". 
11/26/22 01:00:57.397 Nov 26 01:00:57.436: INFO: Unexpected error: failed to list events in namespace "esipp-4340": <*url.Error | 0xc004e70540>: { Op: "Get", URL: "https://34.168.44.214/api/v1/namespaces/esipp-4340/events", Err: <*net.OpError | 0xc00302a230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004d99e00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 168, 44, 214], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003bec260>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 01:00:57.436: FAIL: failed to list events in namespace "esipp-4340": Get "https://34.168.44.214/api/v1/namespaces/esipp-4340/events": dial tcp 34.168.44.214:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00444e5c0, {0xc004e433b0, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00247e820}, {0xc004e433b0, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00444e650?, {0xc004e433b0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000a32000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00315fce0?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00315fce0?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-4340" for this suite. 11/26/22 01:00:57.437 Nov 26 01:00:57.476: FAIL: Couldn't delete ns: "esipp-4340": Delete "https://34.168.44.214/api/v1/namespaces/esipp-4340": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/esipp-4340", Err:(*net.OpError)(0xc003be74a0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a32000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00315fc60?, 0xc00432ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00315fc60?, 0x0?}, {0xae73300?, 0x5?, 0xc004e8a1e0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
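Every failure in this spec wraps the same chain dumped above (*url.Error -> *net.OpError -> *os.SyscallError -> syscall.Errno 0x6f, i.e. ECONNREFUSED/111 on Linux). A short sketch of how that chain unwraps with errors.Is; the GET below is hypothetical, and any client call against the dead apiserver yields the same result:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	// Same URL the framework tried when dumping events for the namespace.
	_, err := http.Get("https://34.168.44.214/api/v1/namespaces/esipp-4340/events")
	if errors.Is(err, syscall.ECONNREFUSED) {
		// errors.Is walks url.Error -> net.OpError -> os.SyscallError -> Errno(0x6f).
		fmt.Println("control plane refused the connection:", err)
	} else if err != nil {
		fmt.Println("other error:", err)
	}
}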
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
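The --ginkgo.focus value is a Go (RE2) regular expression matched against the full test name recorded in junit_01.xml. A quick check (the title string here is reconstructed from the regex itself, so treat it as an assumption rather than Ginkgo output):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus expression copied from the command above.
	focus := `Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$`
	// Reconstructed test name; the brackets are literal and each \s stands for a single space.
	title := "Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods"
	ok, err := regexp.MatchString(focus, title)
	fmt.Println(ok, err) // expected: true <nil>
}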
test/e2e/network/loadbalancer.go:1476 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1476 +0xabd (from junit_01.xml)
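This spec never gets past TestJig.WaitForLoadBalancer: the Progress Report goroutine dumps below show it parked in wait.PollImmediate while every Service GET is refused. A rough sketch of that polling shape, with a stubbed getIngressIP standing in for the client-go lookup (hypothetical; the 2s/15m values come from the "Retrying ...." cadence and the "Waiting up to 15m0s" line in the log):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// getIngressIP stands in for a Services().Get call; it is not part of the real jig.
func getIngressIP() (string, error) { return "", nil }

func main() {
	var ip string
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		got, err := getIngressIP()
		if err != nil {
			// Transient errors such as "connection refused" are logged and retried.
			fmt.Println("Retrying .... error trying to get Service:", err)
			return false, nil
		}
		if got == "" {
			return false, nil // load balancer not provisioned yet, keep polling
		}
		ip = got
		return true, nil
	})
	fmt.Println(ip, err) // err is wait.ErrWaitTimeout if the 15m budget is exhausted
}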
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:16:16.76 Nov 26 01:16:16.760: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 01:16:16.762 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:16:16.886 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:16:16.966 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work from pods test/e2e/network/loadbalancer.go:1422 STEP: creating a service esipp-9254/external-local-pods with type=LoadBalancer 11/26/22 01:16:17.14 STEP: setting ExternalTrafficPolicy=Local 11/26/22 01:16:17.14 STEP: waiting for loadbalancer for service esipp-9254/external-local-pods 11/26/22 01:16:17.191 Nov 26 01:16:17.192: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer Nov 26 01:17:57.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:17:59.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:01.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:03.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:05.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:07.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:09.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:11.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:13.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:15.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:17.272: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:19.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:21.271: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:23.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:25.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:27.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:29.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:31.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:33.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:35.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:37.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:18:39.272: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://34.168.44.214/api/v1/namespaces/esipp-9254/services/external-local-pods": dial tcp 34.168.44.214:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m0.381s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-9254/external-local-pods (Step Runtime: 4m59.95s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3068, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xf0?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc001d0daa0?, 0xc002cc7a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00152e0a0?, 0x7fa7740?, 0xc0001cc640?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ff0500, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ff0500, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ff0500, 0x6aba880?, 0xc002cc7cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ff0500, 0xc001e6c000?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m20.384s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m20.004s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-9254/external-local-pods (Step Runtime: 5m19.953s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3068, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc001d0daa0?, 0xc002cc7a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00152e0a0?, 0x7fa7740?, 0xc0001cc640?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ff0500, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ff0500, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ff0500, 0x6aba880?, 0xc002cc7cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ff0500, 0xc001e6c000?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m20.392s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m20.011s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-9254/external-local-pods (Step Runtime: 6m19.96s) test/e2e/framework/service/jig.go:260
Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3068, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0xc001d0daa0?, 0xc002cc7a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00152e0a0?, 0x7fa7740?, 0xc0001cc640?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ff0500, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ff0500, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ff0500, 0x6aba880?, 0xc002cc7cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ff0500, 0xc001e6c000?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
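The Spec Goroutine above shows where the spec is blocked: the test body at loadbalancer.go:1428 called (*TestJig).CreateOnlyLocalLoadBalancerService, which creates the LoadBalancer Service with externalTrafficPolicy: Local and then sits in WaitForLoadBalancer, polling through wait.PollImmediate until the Service reports an ingress address. The sketch below is a minimal client-go approximation of that create-and-poll pattern, not the framework's actual code; the package and helper names, selector, ports, poll interval and timeout are all illustrative.

```go
// Sketch only: roughly what the blocked calls above are doing — create a
// LoadBalancer Service with externalTrafficPolicy: Local, then poll until the
// cloud provider reports an ingress address or the timeout expires.
package lbsketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createOnlyLocalLBService creates a Service of type LoadBalancer whose
// externalTrafficPolicy is Local, which is what the ESIPP tests exercise.
func createOnlyLocalLBService(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal, // only node-local endpoints, preserves client IP
			Selector:              map[string]string{"app": name},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	return c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
}

// waitForLoadBalancerIngress polls the Service until status.loadBalancer.ingress
// is populated, mirroring the wait.PollImmediate loop in the goroutine stack.
func waitForLoadBalancerIngress(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) (string, error) {
	var addr string
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		for _, ing := range svc.Status.LoadBalancer.Ingress {
			if ing.IP != "" {
				addr = ing.IP
				return true, nil
			}
			if ing.Hostname != "" {
				addr = ing.Hostname
				return true, nil
			}
		}
		return false, nil // no ingress assigned yet
	})
	return addr, err
}
```

In this run the poll did eventually return an address: the later "Hitting external lb 34.168.78.235" step shows the service was assigned 34.168.78.235.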
STEP: creating a pod to be part of the service external-local-pods 11/26/22 01:22:47.291
Nov 26 01:22:47.375: INFO: Waiting up to 2m0s for 1 pods to be created
Nov 26 01:22:47.429: INFO: Found all 1 pods
Nov 26 01:22:47.430: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-pods-tbf9t]
Nov 26 01:22:47.430: INFO: Waiting up to 2m0s for pod "external-local-pods-tbf9t" in namespace "esipp-9254" to be "running and ready"
Nov 26 01:22:47.488: INFO: Pod "external-local-pods-tbf9t": Phase="Pending", Reason="", readiness=false. Elapsed: 58.189185ms
Nov 26 01:22:47.488: INFO: Error evaluating pod condition running and ready: want pod 'external-local-pods-tbf9t' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending'
Nov 26 01:22:49.585: INFO: Pod "external-local-pods-tbf9t": Phase="Running", Reason="", readiness=true. Elapsed: 2.155366017s
Nov 26 01:22:49.585: INFO: Pod "external-local-pods-tbf9t" satisfied condition "running and ready"
Nov 26 01:22:49.585: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-pods-tbf9t]
STEP: waiting for loadbalancer for service esipp-9254/external-local-pods 11/26/22 01:22:49.585
Nov 26 01:22:49.585: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer
STEP: Creating pause pod deployment to make sure, pausePods are in desired state 11/26/22 01:22:49.663
Nov 26 01:22:49.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Nov 26 01:22:51.936: INFO: Waiting up to 5m0s curl 34.168.78.235:80/clientip
STEP: Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 11/26/22 01:22:52.024
Nov 26 01:22:52.024: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
Nov 26 01:22:53.056: INFO: rc: 7
Nov 26 01:22:53.056: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout
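Each of the failed attempts above and below is one iteration of the loop at loadbalancer.go:1467-1468 that the later goroutine dumps point at: RunHostCmd shells out to kubectl exec so that curl runs inside the pause pod against the load balancer's /clientip endpoint, and wait.PollImmediate keeps retrying until the 5m0s budget from "Waiting up to 5m0s curl 34.168.78.235:80/clientip" is exhausted. Below is a rough sketch of that retry shape, assuming a kubectl binary on $PATH; the helper name is invented here, while the namespace, pod and address are the ones from this log.

```go
// Sketch only: approximates the retry loop behind the log lines above. A
// non-zero kubectl exit status (e.g. curl's exit code 7, connection failed)
// is treated as "not yet reachable" and retried until the timeout.
package lbsketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func hitLBFromPod(kubeconfig, ns, pod, lbAddr string, timeout time.Duration) (string, error) {
	var clientIP string
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		cmd := exec.Command("kubectl",
			"--kubeconfig", kubeconfig,
			"--namespace", ns,
			"exec", pod, "--",
			"/bin/sh", "-x", "-c",
			fmt.Sprintf("curl -q -s --connect-timeout 30 %s/clientip", lbAddr))
		out, err := cmd.Output()
		if err != nil {
			return false, nil // curl exit 7 or exec transport errors: retry until timeout
		}
		clientIP = strings.TrimSpace(string(out)) // backend echoes the source address it saw
		return clientIP != "", nil
	})
	return clientIP, err
}
```

With the values from this run that would be roughly hitLBFromPod("/workspace/.kube/config", "esipp-9254", "pause-pod-deployment-6864d4f788-wk5sg", "34.168.78.235:80", 5*time.Minute). Two distinct failure modes show up in the retries that follow: exit status 7 means curl ran inside the pod but could not connect to 34.168.78.235:80, while the occasional exit status 1 with "error dialing backend ... konnectivity-server.socket: connect: no such file or directory" or "No agent available" indicates the API server could not reach the node to run the exec at all, so those attempts say nothing about the load balancer itself.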
Nov 26 01:22:55.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
Nov 26 01:22:55.803: INFO: rc: 7
Nov 26 01:22:55.803: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout
Nov 26 01:22:57.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m40.393s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m40.013s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 5.129s) test/e2e/network/loadbalancer.go:1466
Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0016afb80?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:22:57.743: INFO: rc: 7 Nov 26 01:22:57.743: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:22:59.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:22:59.832: INFO: rc: 7 Nov 26 01:22:59.832: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:01.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:23:01.797: INFO: rc: 7 Nov 26 01:23:01.797: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:03.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:23:03.856: INFO: rc: 7 Nov 26 01:23:03.856: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q 
-s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:05.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:23:05.702: INFO: rc: 7 Nov 26 01:23:05.702: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:07.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:23:07.848: INFO: rc: 7 Nov 26 01:23:07.849: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:09.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:23:09.725: INFO: rc: 7 Nov 26 01:23:09.725: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:23:11.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m0.396s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m0.015s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on 
node bootstrap-e2e-minion-group-2982 (Step Runtime: 25.132s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000a7c000?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m20.398s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m20.018s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 45.134s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000a7c000?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:23:41.484: INFO: rc: 1 Nov 26 01:23:41.484: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" error: exit status 1, retry until timeout Nov 26 01:23:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m40.401s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m40.02s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 1m5.137s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001dce2c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) 
test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:24:07.761: INFO: rc: 1 Nov 26 01:24:07.761: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:24:09.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:09.790: INFO: rc: 7 Nov 26 01:24:09.790: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:11.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 
--kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:11.814: INFO: rc: 7 Nov 26 01:24:11.814: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:13.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:13.707: INFO: rc: 7 Nov 26 01:24:13.708: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:15.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:15.792: INFO: rc: 7 Nov 26 01:24:15.792: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:17.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m0.403s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m0.023s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 1m25.139s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0009182c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) 
test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:24:17.726: INFO: rc: 7 Nov 26 01:24:17.726: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:19.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:19.746: INFO: rc: 7 Nov 26 01:24:19.746: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:21.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:21.705: INFO: rc: 7 Nov 26 01:24:21.705: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:23.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:23.715: INFO: rc: 7 Nov 26 01:24:23.715: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:25.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:25.890: INFO: rc: 7 Nov 26 01:24:25.890: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:27.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:27.812: INFO: rc: 7 Nov 26 01:24:27.812: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:29.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config 
--namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:29.744: INFO: rc: 7 Nov 26 01:24:29.744: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:31.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:31.715: INFO: rc: 7 Nov 26 01:24:31.715: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:33.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:33.866: INFO: rc: 7 Nov 26 01:24:33.866: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:35.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:35.802: INFO: rc: 7 Nov 26 01:24:35.802: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:37.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- 
/bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m20.405s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m20.025s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 1m45.141s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001dce6e0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:24:37.807: INFO: rc: 7 Nov 26 01:24:37.807: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:39.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:39.974: INFO: rc: 7 Nov 26 01:24:39.974: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:41.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:41.715: INFO: rc: 7 Nov 26 01:24:41.715: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:43.800: INFO: rc: 7 Nov 26 01:24:43.800: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q 
-s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:45.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:45.787: INFO: rc: 7 Nov 26 01:24:45.787: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:47.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:47.660: INFO: rc: 7 Nov 26 01:24:47.660: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:49.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:49.707: INFO: rc: 7 Nov 26 01:24:49.707: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:51.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:51.698: INFO: rc: 7 Nov 26 01:24:51.698: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: 
stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:53.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:53.859: INFO: rc: 7 Nov 26 01:24:53.859: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:55.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:55.862: INFO: rc: 7 Nov 26 01:24:55.862: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:57.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m40.408s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m40.028s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 2m5.144s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000918c60?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:24:57.806: INFO: rc: 7 Nov 26 01:24:57.806: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:24:59.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:24:59.641: INFO: rc: 7 Nov 26 01:24:59.641: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:01.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:01.580: INFO: rc: 7 Nov 26 01:25:01.580: 
INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:03.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:03.690: INFO: rc: 7 Nov 26 01:25:03.690: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:05.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:05.581: INFO: rc: 7 Nov 26 01:25:05.581: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:07.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:07.592: INFO: rc: 7 Nov 26 01:25:07.592: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:09.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:09.571: INFO: rc: 7 Nov 26 01:25:09.571: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:11.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:11.580: INFO: rc: 7 Nov 26 01:25:11.580: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:13.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:13.666: INFO: rc: 7 Nov 26 01:25:13.666: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:15.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:15.576: INFO: rc: 7 Nov 26 01:25:15.576: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:17.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should 
work from pods (Spec Runtime: 9m0.41s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m0.03s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 2m25.146s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0009182c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:25:17.583: INFO: rc: 7 Nov 26 01:25:17.583: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:19.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:19.574: INFO: rc: 7 Nov 26 01:25:19.574: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:21.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:21.574: INFO: rc: 7 Nov 26 01:25:21.574: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:23.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:23.595: INFO: rc: 7 Nov 26 01:25:23.596: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q 
-s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:25.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:25.574: INFO: rc: 7 Nov 26 01:25:25.574: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:27.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:27.596: INFO: rc: 7 Nov 26 01:25:27.596: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:29.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:29.579: INFO: rc: 7 Nov 26 01:25:29.579: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:31.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:31.578: INFO: rc: 7 Nov 26 01:25:31.578: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: 
stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:33.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:33.601: INFO: rc: 7 Nov 26 01:25:33.601: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:35.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:35.571: INFO: rc: 7 Nov 26 01:25:35.571: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:37.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 9m20.413s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m20.033s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 2m45.149s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000dd6420?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:25:37.583: INFO: rc: 7 Nov 26 01:25:37.583: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:39.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:39.567: INFO: rc: 7 Nov 26 01:25:39.567: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:41.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:41.567: INFO: rc: 7 Nov 26 01:25:41.567: 
INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:25:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:43.394: INFO: rc: 1 Nov 26 01:25:43.395: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:45.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:45.384: INFO: rc: 1 Nov 26 01:25:45.384: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:47.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:47.395: INFO: rc: 1 Nov 26 01:25:47.395: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:49.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:49.389: INFO: rc: 1 Nov 26 01:25:49.389: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 
--kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:51.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:51.399: INFO: rc: 1 Nov 26 01:25:51.399: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:53.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:53.397: INFO: rc: 1 Nov 26 01:25:53.398: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:55.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:55.390: INFO: rc: 1 Nov 26 01:25:55.390: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:57.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 9m40.416s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m40.035s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node 
bootstrap-e2e-minion-group-2982 (Step Runtime: 3m5.152s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000918b00?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:25:57.389: INFO: rc: 1 Nov 26 01:25:57.389: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:25:59.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:25:59.400: INFO: rc: 1 Nov 26 01:25:59.400: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 
34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:01.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:01.390: INFO: rc: 1 Nov 26 01:26:01.390: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:03.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:03.406: INFO: rc: 1 Nov 26 01:26:03.406: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:05.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:05.394: INFO: rc: 1 Nov 26 01:26:05.394: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:07.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:07.469: INFO: rc: 1 Nov 26 01:26:07.470: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:09.056: INFO: Running 
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:09.439: INFO: rc: 1 Nov 26 01:26:09.439: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:11.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:11.421: INFO: rc: 1 Nov 26 01:26:11.421: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:13.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:13.399: INFO: rc: 1 Nov 26 01:26:13.399: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:15.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:15.387: INFO: rc: 1 Nov 26 01:26:15.387: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:17.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec 
pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m0.418s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m0.037s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 3m25.154s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000cad600?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:26:17.391: INFO: rc: 1 Nov 26 01:26:17.391: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:19.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:19.388: INFO: rc: 1 Nov 26 01:26:19.388: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:21.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:21.393: INFO: rc: 1 Nov 26 01:26:21.393: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:23.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:23.405: INFO: rc: 1 Nov 26 01:26:23.405: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error 
dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:25.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:25.386: INFO: rc: 1 Nov 26 01:26:25.386: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:27.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:27.399: INFO: rc: 1 Nov 26 01:26:27.400: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:29.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:29.386: INFO: rc: 1 Nov 26 01:26:29.386: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:31.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:31.391: INFO: rc: 1 Nov 26 01:26:31.391: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:33.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl 
--server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:33.406: INFO: rc: 1 Nov 26 01:26:33.406: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:35.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:35.385: INFO: rc: 1 Nov 26 01:26:35.385: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:37.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m20.421s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m20.04s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 3m45.157s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0009198c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:26:37.390: INFO: rc: 1 Nov 26 01:26:37.390: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:39.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:39.386: INFO: rc: 1 Nov 26 01:26:39.386: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:41.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:26:41.422: INFO: rc: 1 Nov 26 01:26:41.422: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:26:43.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec 
Nov 26 01:26:37.390 to 01:26:55.382: rc: 1 on every attempt; the exec was retried every ~2s with the same "Error from server: error dialing backend: No agent available" stderr (exit status 1, retry until timeout).
Nov 26 01:26:57.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m40.423s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 10m40.043s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 4m5.159s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as the first report above (only the KubectlBuilder.ExecWithFullOutput argument pointer differs).
------------------------------
Nov 26 01:26:57.397 to 01:27:17.056: the exec was retried every ~2s (rc: 1, same "No agent available" error each time, retry until timeout).
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m0.426s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 11m0.045s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 4m25.162s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as above.
------------------------------
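Each "Running ..." line is the command that RunHostCmd (test/e2e/framework/pod/output/output.go:82) hands to RunKubectl (test/e2e/framework/kubectl/builder.go:154): a kubectl exec into the pause pod that wraps the curl in /bin/sh -x -c. Roughly, and only as an approximation of those helpers rather than their verbatim source:

package lbpoll

import e2ekubectl "k8s.io/kubernetes/test/e2e/framework/kubectl"

// runHostCmd approximates output.RunHostCmd: run a shell command inside the
// named pod via `kubectl exec` in the given namespace. The real builder also
// fills in --server, --kubeconfig and --namespace, which is why every retry in
// the log repeats the full flag set.
func runHostCmd(ns, pod, cmd string) (string, error) {
	return e2ekubectl.RunKubectl(ns, "exec", pod, "--", "/bin/sh", "-x", "-c", cmd)
}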
Nov 26 01:27:17.388 to 01:27:35.391: the exec was retried every ~2s (rc: 1, same "error dialing backend: No agent available" each time, retry until timeout).
Nov 26 01:27:37.056: INFO: Running the same kubectl exec curl against 34.168.78.235:80/clientip.
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m20.428s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 11m20.047s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 4m45.164s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as above.
------------------------------
Nov 26 01:27:37.388 to 01:27:55.394: the exec was retried every ~2s (rc: 1, same "No agent available" error each time, retry until timeout).
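What the loop is waiting for is any body from /clientip; the netexec-style backend used by these load-balancer tests answers with the source address of the connection as "ip:port". A hedged sketch of the kind of check that would follow once a body arrives is below; the helper name and the exact expectation (that the client's own IP is preserved when ExternalTrafficPolicy is Local) are assumptions about the test's intent, not its verbatim code.

package lbpoll

import (
	"fmt"
	"net"
	"strings"
)

// clientIPMatches parses a "/clientip" body of the form "10.64.1.5:48808" and
// reports whether the source address seen by the backend is the expected one,
// i.e. whether the real client IP was preserved rather than SNATed to a node
// address.
func clientIPMatches(body, expected string) (bool, error) {
	host, _, err := net.SplitHostPort(strings.TrimSpace(body))
	if err != nil {
		return false, fmt.Errorf("unexpected /clientip response %q: %w", body, err)
	}
	return host == expected, nil
}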
Nov 26 01:27:57.057: INFO: Running the same kubectl exec curl against 34.168.78.235:80/clientip.
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m40.43s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 11m40.049s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 5m5.166s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as above.
------------------------------
Nov 26 01:27:57.386 to 01:28:15.435: the exec was retried every ~2s (rc: 1, same "error dialing backend: No agent available" each time, retry until timeout).
Nov 26 01:28:17.057: INFO: Running the same kubectl exec curl against 34.168.78.235:80/clientip.
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m0.432s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 12m0.051s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 5m25.168s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as above.
------------------------------
Nov 26 01:28:17.387 to 01:28:21.386: the exec was retried every ~2s (rc: 1, same "No agent available" error each time, retry until timeout).
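Every progress report shows goroutine 2188 in state [select] inside KubectlBuilder.ExecWithFullOutput (builder.go:125), which is presumably the builder waiting on either the kubectl child process or its timeout. A generic sketch of that start-then-select pattern, under the assumption that the builder works this way, not as a copy of its source:

package lbpoll

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

// runWithTimeout starts cmd, then blocks in a select until the process exits
// or the timeout fires; a goroutine blocked in such a select shows up as
// "[select]" in goroutine dumps like the progress reports above.
func runWithTimeout(cmd *exec.Cmd, timeout time.Duration) (string, error) {
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Start(); err != nil {
		return "", err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		if err != nil {
			return stdout.String(), fmt.Errorf("command failed: %v, stderr: %s", err, stderr.String())
		}
		return stdout.String(), nil
	case <-time.After(timeout):
		_ = cmd.Process.Kill()
		return stdout.String(), fmt.Errorf("timed out after %v", timeout)
	}
}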
Nov 26 01:28:23.057 to 01:28:35.396: the exec was retried every ~2s (rc: 1, same "error dialing backend: No agent available" each time, retry until timeout).
Nov 26 01:28:37.057: INFO: Running the same kubectl exec curl against 34.168.78.235:80/clientip.
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m20.435s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 12m20.054s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 5m45.171s)
        test/e2e/network/loadbalancer.go:1466
  Spec Goroutine: goroutine 2188 [select], same stack as above.
------------------------------
Nov 26 01:28:37.390: INFO: rc: 1
Nov 26 01:28:37.390: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip:
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error:
exit status 1, retry until timeout
Nov 26 01:28:39.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
Nov 26 01:28:39.394: INFO: rc: 1
Nov 26 01:28:39.394: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip:
Command stdout:
stderr:
Error
from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:41.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:41.411: INFO: rc: 1 Nov 26 01:28:41.411: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:43.391: INFO: rc: 1 Nov 26 01:28:43.391: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:45.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:45.387: INFO: rc: 1 Nov 26 01:28:45.387: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:47.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:47.388: INFO: rc: 1 Nov 26 01:28:47.388: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:49.056: INFO: Running 
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:49.386: INFO: rc: 1 Nov 26 01:28:49.386: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:51.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:51.394: INFO: rc: 1 Nov 26 01:28:51.394: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:53.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:53.396: INFO: rc: 1 Nov 26 01:28:53.396: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:55.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:55.388: INFO: rc: 1 Nov 26 01:28:55.388: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:57.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec 
pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m40.438s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m40.057s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 6m5.174s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0024b42c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:28:57.395: INFO: rc: 1 Nov 26 01:28:57.395: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:28:59.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:28:59.391: INFO: rc: 1 Nov 26 01:28:59.391: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:01.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:01.391: INFO: rc: 1 Nov 26 01:29:01.391: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:03.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:03.404: INFO: rc: 1 Nov 26 01:29:03.404: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error 
dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:05.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:05.185: INFO: rc: 1 Nov 26 01:29:05.185: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:07.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:07.169: INFO: rc: 1 Nov 26 01:29:07.169: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:09.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:09.168: INFO: rc: 1 Nov 26 01:29:09.168: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:11.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:11.172: INFO: rc: 1 Nov 26 01:29:11.172: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1, retry until timeout Nov 26 01:29:13.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:13.176: INFO: rc: 1 Nov 26 01:29:13.176: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:15.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:15.169: INFO: rc: 1 Nov 26 01:29:15.169: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:17.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:17.170: INFO: rc: 1 Nov 26 01:29:17.170: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m0.44s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m0.059s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 6m25.176s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:29:19.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:19.176: INFO: rc: 1 Nov 26 01:29:19.176: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:21.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:21.170: INFO: rc: 1 Nov 26 01:29:21.170: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:23.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:23.174: INFO: rc: 1 Nov 26 01:29:23.174: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1, retry until timeout Nov 26 01:29:25.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:25.172: INFO: rc: 1 Nov 26 01:29:25.172: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:27.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:27.169: INFO: rc: 1 Nov 26 01:29:27.169: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:29.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:29.172: INFO: rc: 1 Nov 26 01:29:29.172: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:31.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:31.178: INFO: rc: 1 Nov 26 01:29:31.178: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1, retry until timeout Nov 26 01:29:33.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:33.175: INFO: rc: 1 Nov 26 01:29:33.175: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:35.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:35.168: INFO: rc: 1 Nov 26 01:29:35.168: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:37.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:37.168: INFO: rc: 1 Nov 26 01:29:37.168: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m20.442s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m20.061s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 6m45.178s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:29:39.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:39.169: INFO: rc: 1 Nov 26 01:29:39.169: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:41.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:41.174: INFO: rc: 1 Nov 26 01:29:41.174: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:43.173: INFO: rc: 1 Nov 26 01:29:43.173: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1, retry until timeout Nov 26 01:29:45.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:45.168: INFO: rc: 1 Nov 26 01:29:45.168: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:47.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:47.170: INFO: rc: 1 Nov 26 01:29:47.170: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:49.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:49.166: INFO: rc: 1 Nov 26 01:29:49.166: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? error: exit status 1, retry until timeout Nov 26 01:29:51.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:51.168: INFO: rc: 1 Nov 26 01:29:51.168: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: The connection to the server 34.168.44.214 was refused - did you specify the right host or port? 
error: exit status 1, retry until timeout Nov 26 01:29:53.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:54.954: INFO: rc: 1 Nov 26 01:29:54.954: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:55.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:55.389: INFO: rc: 1 Nov 26 01:29:55.389: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:57.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m40.445s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m40.064s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 7m5.181s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0024b4840?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:29:57.389: INFO: rc: 1 Nov 26 01:29:57.389: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:29:59.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:29:59.387: INFO: rc: 1 Nov 26 01:29:59.387: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:30:01.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:30:01.386: INFO: rc: 1 Nov 26 01:30:01.386: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1, retry until timeout Nov 26 01:30:03.056: INFO: Running 
'/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
Nov 26 01:30:03.594: INFO: rc: 7
Nov 26 01:30:03.594: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 30 34.168.78.235:80/clientip
command terminated with exit code 7
error: exit status 7, retry until timeout
[the same "Running .../kubectl exec pause-pod-deployment-6864d4f788-wk5sg -- curl -q -s --connect-timeout 30 34.168.78.235:80/clientip" attempt repeated every ~2 s at 01:30:05, 01:30:07, 01:30:09, 01:30:11, 01:30:13 and 01:30:15, each time ending in rc: 7 with identical stdout/stderr]
Nov 26 01:30:17.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m0.447s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 14m0.067s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 7m25.183s)
        test/e2e/network/loadbalancer.go:1466

  Spec Goroutine
  goroutine 2188 [select]
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0024b4dc0?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
  k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
  k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?})
    test/e2e/framework/kubectl/builder.go:154
  k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3()
    test/e2e/network/loadbalancer.go:1468
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1467
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[the attempts at 01:30:17 and 01:30:19 failed the same way: rc: 7, curl exit code 7, "retry until timeout"]
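The goroutine stack in the progress report above shows where the spec is stuck: the test body at test/e2e/network/loadbalancer.go:1467 sits inside wait.PollImmediate, and each poll runs the "kubectl exec ... curl" probe through RunHostCmd/RunKubectl. The Go sketch below reconstructs that retry pattern in a self-contained way; it is illustrative only, not the actual loadbalancer.go code. It shells out to kubectl directly instead of going through the framework's KubectlBuilder, and the 2-second interval and 15-minute budget are read off the cadence visible in this log rather than taken from the test source.

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	const (
		namespace = "esipp-9254"
		pod       = "pause-pod-deployment-6864d4f788-wk5sg"
		target    = "34.168.78.235:80/clientip"
	)

	// Poll immediately, then every 2s, until the curl probe succeeds or the
	// overall budget runs out -- the same shape as the PollImmediate frame in
	// the stack above (interval and timeout here are assumptions from the log).
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		cmd := exec.Command("kubectl", "--namespace="+namespace, "exec", pod, "--",
			"/bin/sh", "-x", "-c", "curl -q -s --connect-timeout 30 "+target)
		out, err := cmd.CombinedOutput()
		if err != nil {
			// curl's exit status (7 in this log) is propagated through
			// kubectl exec; returning (false, nil) keeps the loop retrying.
			fmt.Printf("got err: %v, retry until timeout\n", err)
			return false, nil
		}
		fmt.Printf("clientip: %s\n", out)
		return true, nil
	})
	if err != nil {
		fmt.Printf("external LB %s never answered: %v\n", target, err)
	}
}

Because the condition function only ever returns (false, nil) on failure, the loop never aborts early; it keeps retrying until the outer timeout fires, which is exactly the "retry until timeout" behaviour filling this log.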
[identical attempts at 01:30:21, 01:30:23, 01:30:25, 01:30:27, 01:30:29, 01:30:31, 01:30:33 and 01:30:35 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:30:37.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m20.449s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 14m20.069s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 7m45.185s)
        test/e2e/network/loadbalancer.go:1466
  [Spec Goroutine: goroutine 2188 [select], same call stack as the 14m0s report above]
------------------------------
[identical attempts at 01:30:37, 01:30:39, 01:30:41, 01:30:43, 01:30:45, 01:30:47, 01:30:49, 01:30:51, 01:30:53 and 01:30:55 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:30:57.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m40.452s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 14m40.071s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 8m5.188s)
        test/e2e/network/loadbalancer.go:1466
  [Spec Goroutine: goroutine 2188 [select], same call stack as the 14m0s report above]
------------------------------
[identical attempts at 01:30:57, 01:30:59 and 01:31:01 all ended in rc: 7, curl exit code 7, "retry until timeout"]
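Every one of these failures reports rc: 7. That is curl's own exit status, propagated back through "kubectl exec": 7 is CURLE_COULDNT_CONNECT, meaning the pod could not open a TCP connection to 34.168.78.235:80 at all, as opposed to 28, which would mean the --connect-timeout 30 budget expired. The small helper below, which is not part of the e2e framework and exists only to illustrate the mapping, translates the exec error into that meaning.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// curlExitMeaning maps the curl exit codes relevant to this log to their
// documented meanings (see the EXIT CODES section of the curl man page).
var curlExitMeaning = map[int]string{
	7:  "CURLE_COULDNT_CONNECT: failed to connect to host or proxy",
	28: "CURLE_OPERATION_TIMEDOUT: the --connect-timeout budget expired",
}

// describeCurlFailure turns the error returned by running curl (directly or
// via `kubectl exec`) into a human-readable explanation of the exit code.
func describeCurlFailure(err error) string {
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		if meaning, ok := curlExitMeaning[exitErr.ExitCode()]; ok {
			return fmt.Sprintf("curl exit %d (%s)", exitErr.ExitCode(), meaning)
		}
		return fmt.Sprintf("curl exit %d", exitErr.ExitCode())
	}
	return err.Error()
}

func main() {
	// 127.0.0.1:9 is almost always a closed port, so this curl should fail
	// with exit code 7 (connection refused), mirroring the failures above.
	err := exec.Command("curl", "-q", "-s", "--connect-timeout", "5", "127.0.0.1:9/clientip").Run()
	if err != nil {
		fmt.Println(describeCurlFailure(err))
	}
}

The practical takeaway from the exit code is that nothing is answering on the load balancer VIP from inside the pod: the connection attempt is rejected or dropped outright rather than timing out mid-handshake.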
[identical attempts at 01:31:03, 01:31:05, 01:31:07, 01:31:09, 01:31:11, 01:31:13 and 01:31:15 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:31:17.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m0.454s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 15m0.074s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 8m25.19s)
        test/e2e/network/loadbalancer.go:1466
  [Spec Goroutine: goroutine 2188 [select], same call stack as the 14m0s report above]
------------------------------
[identical attempts at 01:31:17, 01:31:19, 01:31:21, 01:31:23, 01:31:25, 01:31:27, 01:31:29, 01:31:31, 01:31:33 and 01:31:35 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:31:37.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m20.456s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 15m20.076s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 8m45.192s)
        test/e2e/network/loadbalancer.go:1466
  [Spec Goroutine: goroutine 2188 [select], same call stack as the 14m0s report above]
------------------------------
[identical attempts at 01:31:37, 01:31:39 and 01:31:41 all ended in rc: 7, curl exit code 7, "retry until timeout"]
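The "Progress Report for Ginkgo Process #12 / Automatically polling progress" banners are Ginkgo v2's progress polling: when a spec has been running without completing for a configured interval, Ginkgo dumps the spec's current [By] step and the spec goroutine's stack, which is how the roughly 20-second reports in this log are produced. The sketch below shows how a single slow spec could opt into that behaviour; it assumes Ginkgo v2's PollProgressAfter/PollProgressInterval decorators, and the 20 s interval is inferred from this log rather than taken from the real suite, which configures polling globally via command-line flags.

package progress_sketch_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestProgressSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "progress report sketch")
}

// A deliberately slow spec: while it runs, Ginkgo emits "Automatically polling
// progress" reports containing the current By step and the spec goroutine.
var _ = It("keeps retrying a slow external dependency",
	PollProgressAfter(20*time.Second),    // first report after 20s without progress
	PollProgressInterval(20*time.Second), // then one report every 20s
	func() {
		By("Hitting external lb (stand-in for the real curl retry loop)")
		time.Sleep(90 * time.Second)
	})

These reports are diagnostic only; they do not fail the spec, which is why the retry loop below keeps going until its own timeout expires.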
[identical attempts at 01:31:43, 01:31:45, 01:31:47, 01:31:49, 01:31:51, 01:31:53 and 01:31:55 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:31:57.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m40.459s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 15m40.078s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 9m5.195s)
        test/e2e/network/loadbalancer.go:1466
  [Spec Goroutine: goroutine 2188 [select], same call stack as the 14m0s report above]
------------------------------
[identical attempts at 01:31:57, 01:31:59 and 01:32:01 all ended in rc: 7, curl exit code 7, "retry until timeout"]
Nov 26 01:32:03.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip'
Nov 26 01:32:03.724: INFO: rc: 7
Nov 26 01:32:03.724: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q
-s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:05.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:06.894: INFO: rc: 7 Nov 26 01:32:06.894: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:07.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:07.714: INFO: rc: 7 Nov 26 01:32:07.714: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:09.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:09.799: INFO: rc: 7 Nov 26 01:32:09.799: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:11.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:11.679: INFO: rc: 7 Nov 26 01:32:11.679: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: 
stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:13.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:13.741: INFO: rc: 7 Nov 26 01:32:13.741: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:15.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:15.766: INFO: rc: 7 Nov 26 01:32:15.766: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:17.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 16m0.461s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 16m0.081s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 9m25.197s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0024b46e0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) 
test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:32:17.748: INFO: rc: 7 Nov 26 01:32:17.748: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:19.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:19.799: INFO: rc: 7 Nov 26 01:32:19.799: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:21.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:21.669: INFO: rc: 7 Nov 26 01:32:21.669: 
INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:23.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:23.682: INFO: rc: 7 Nov 26 01:32:23.682: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:25.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:25.866: INFO: rc: 7 Nov 26 01:32:25.866: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:27.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:27.596: INFO: rc: 7 Nov 26 01:32:27.596: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:29.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:29.696: INFO: rc: 7 Nov 26 01:32:29.696: INFO: got err: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:31.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:31.609: INFO: rc: 7 Nov 26 01:32:31.609: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:33.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:33.647: INFO: rc: 7 Nov 26 01:32:33.647: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:35.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:35.655: INFO: rc: 7 Nov 26 01:32:35.655: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:37.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should 
work from pods (Spec Runtime: 16m20.464s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 16m20.084s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 9m45.2s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc0024b4c60?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0030a1fe0?, 0x1?}, {0xc002cc7ad8?, 0x101010020?, 0x0?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 > k8s.io/kubernetes/test/e2e/network.glob..func20.6.3() test/e2e/network/loadbalancer.go:1468 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc00013a000?}, 0x2?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cb3e18, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0xb0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00013a000}, 0x0?, 0xc002cc7d00?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000147600?, 0x78?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1467 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 01:32:37.625: INFO: rc: 7 Nov 26 01:32:37.625: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:39.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:39.699: INFO: rc: 7 Nov 26 01:32:39.699: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:41.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:41.682: INFO: rc: 7 Nov 26 01:32:41.682: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:43.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:43.665: INFO: rc: 7 Nov 26 01:32:43.665: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q 
-s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:45.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:45.675: INFO: rc: 7 Nov 26 01:32:45.675: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:47.057: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:47.742: INFO: rc: 7 Nov 26 01:32:47.742: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:49.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:49.674: INFO: rc: 7 Nov 26 01:32:49.674: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:51.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:51.726: INFO: rc: 7 Nov 26 01:32:51.726: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: 
stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:53.056: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:53.646: INFO: rc: 7 Nov 26 01:32:53.646: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:53.646: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip' Nov 26 01:32:54.350: INFO: rc: 7 Nov 26 01:32:54.351: INFO: got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 exec pause-pod-deployment-6864d4f788-wk5sg -- /bin/sh -x -c curl -q -s --connect-timeout 30 34.168.78.235:80/clientip: Command stdout: stderr: + curl -q -s --connect-timeout 30 34.168.78.235:80/clientip command terminated with exit code 7 error: exit status 7, retry until timeout Nov 26 01:32:54.351: FAIL: Source IP not preserved from pause-pod-deployment-6864d4f788-wk5sg, expected '10.64.1.63' got '' Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1476 +0xabd Nov 26 01:32:54.351: INFO: Deleting deployment Nov 26 01:32:54.661: INFO: Waiting up to 15m0s for service "external-local-pods" to have no LoadBalancer ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 16m40.467s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 16m40.087s) test/e2e/network/loadbalancer.go:1422 At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 10m5.203s) test/e2e/network/loadbalancer.go:1466 Spec Goroutine goroutine 2188 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000a374d0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x90?, 0x2fd9d05?, 0x48?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cc76f0?, 0xc002cc76e0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x7ffe748874fd?, 0xa?, 0x7fe0bc8?) 
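The goroutine stack and the failure message above show what the failing step does: from the pause pod it repeatedly runs kubectl exec ... curl against 34.168.78.235:80/clientip (RunHostCmd inside a wait.PollImmediate loop), and the step only passes once /clientip echoes the pod's own address (10.64.1.63 in this run); because every curl attempt exits with code 7, the poll times out and the assertion fails with an empty string. Below is a minimal, self-contained sketch of that probe using plain os/exec instead of the e2e framework helpers; the namespace, pod name, load-balancer IP, expected source IP and ~2 second interval are taken from this log, while the standalone program structure and the 10-minute timeout are illustrative assumptions, not the test's actual code.

// clientipcheck.go: sketch of the ESIPP "should work from pods" probe,
// approximated with plain os/exec rather than the e2e framework helpers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		namespace = "esipp-9254"                            // from this run
		pod       = "pause-pod-deployment-6864d4f788-wk5sg" // from this run
		lbIP      = "34.168.78.235"                         // external LB IP from this run
		podIP     = "10.64.1.63"                            // expected preserved source IP
		interval  = 2 * time.Second                         // matches the retry cadence in the log
		timeout   = 10 * time.Minute                        // assumed; the real step has its own timeout
	)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same shape as the logged command: kubectl exec <pod> -- sh -c 'curl .../clientip'
		cmd := exec.Command("kubectl", "--namespace", namespace, "exec", pod, "--",
			"/bin/sh", "-c", fmt.Sprintf("curl -q -s --connect-timeout 30 %s:80/clientip", lbIP))
		out, err := cmd.CombinedOutput()
		if err != nil {
			// curl exit code 7 ("failed to connect") surfaces here as a non-nil error.
			fmt.Printf("got err: %v, retry until timeout\n", err)
			time.Sleep(interval)
			continue
		}
		// /clientip answers with "ip:port"; only the IP part matters for the check.
		gotIP := strings.Split(strings.TrimSpace(string(out)), ":")[0]
		if gotIP == podIP {
			fmt.Println("source IP preserved:", gotIP)
			return
		}
		fmt.Printf("unexpected client IP %q, retrying\n", gotIP)
		time.Sleep(interval)
	}
	fmt.Printf("FAIL: Source IP not preserved from %s, expected '%s'\n", pod, podIP)
}

Run against this cluster while the backing netexec container is unready, a sketch like this would likely reproduce the same rc 7 retries seen above, since nothing behind the load balancer is answering.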
------------------------------
Progress Report for Ginkgo Process #12
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 16m40.467s)
    test/e2e/network/loadbalancer.go:1422
    In [It] (Node Runtime: 16m40.087s)
      test/e2e/network/loadbalancer.go:1422
      At [By Step] Hitting external lb 34.168.78.235 from pod pause-pod-deployment-6864d4f788-wk5sg on node bootstrap-e2e-minion-group-2982 (Step Runtime: 10m5.203s)
        test/e2e/network/loadbalancer.go:1466

Spec Goroutine
goroutine 2188 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00013a000}, 0xc000a374d0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00013a000}, 0x90?, 0x2fd9d05?, 0x48?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc00013a000}, 0xc002cc76f0?, 0xc002cc76e0?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x7ffe748874fd?, 0xa?, 0x7fe0bc8?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  k8s.io/kubernetes/test/e2e/framework/providers/gce.(*Provider).EnsureLoadBalancerResourcesDeleted(0xc000e9a7f0, {0xc0010d23e0, 0xd}, {0x77c6ae2, 0x2})
    test/e2e/framework/providers/gce/gce.go:195
  k8s.io/kubernetes/test/e2e/framework.EnsureLoadBalancerResourcesDeleted(...)
    test/e2e/framework/util.go:551
  k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy.func1()
    test/e2e/framework/service/jig.go:602
  k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy(0xc003ff0500, {0xc0010d23e0?, 0x23?}, 0xc002cc7910?, 0x26282e7?)
    test/e2e/framework/service/jig.go:614
  k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).ChangeServiceType(0x0?, {0x75c5095?, 0x0?}, 0x0?)
    test/e2e/framework/service/jig.go:186
> k8s.io/kubernetes/test/e2e/network.glob..func20.6.1()
    test/e2e/network/loadbalancer.go:1431
  panic({0x70eb7e0, 0xc0007cc690})
    /usr/local/go/src/runtime/panic.go:884
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc000f1de00, 0x75}, {0xc002cc7cb8?, 0x75b521a?, 0xc002cc7ce0?})
    vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352
  k8s.io/kubernetes/test/e2e/framework.Failf({0x7709565?, 0xa?}, {0xc002cc7f28?, 0x0?, 0x3?})
    test/e2e/framework/log.go:49
> k8s.io/kubernetes/test/e2e/network.glob..func20.6()
    test/e2e/network/loadbalancer.go:1476
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc001f66300})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 26 01:33:05.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 26 01:33:05.286: INFO: Output of kubectl describe svc:
Nov 26 01:33:05.286: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.168.44.214 --kubeconfig=/workspace/.kube/config --namespace=esipp-9254 describe svc --namespace=esipp-9254'
Nov 26 01:33:05.624: INFO: stderr: ""
Nov 26 01:33:05.624: INFO: stdout:
Name:              external-local-pods
Namespace:         esipp-9254
Labels:            testid=external-local-pods-747387b2-86bb-42d7-9f44-adb6f7eb5d03
Annotations:       <none>
Selector:          testid=external-local-pods-747387b2-86bb-42d7-9f44-adb6f7eb5d03
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.47.231
IPs:               10.0.47.231
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:
Session Affinity:  None
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  16m    service-controller  Ensuring load balancer
  Normal  EnsuringLoadBalancer  10m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   10m    service-controller  Ensured load balancer
  Normal  EnsuringLoadBalancer  8m57s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   8m56s  service-controller  Ensured load balancer
  Normal  EnsuredLoadBalancer   5m50s  service-controller  Ensured load balancer
  Normal  EnsuringLoadBalancer  68s    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   66s    service-controller  Ensured load balancer
  Normal  Type                  11s    service-controller  LoadBalancer -> ClusterIP
  Normal  DeletingLoadBalancer  11s    service-controller  Deleting load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 01:33:05.625
STEP: Collecting events from namespace "esipp-9254". 11/26/22 01:33:05.625
STEP: Found 28 events. 11/26/22 01:33:05.681
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:16:31 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:44 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:45 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:47 +0000 UTC - event for external-local-pods: {replication-controller } SuccessfulCreate: Created pod: external-local-pods-tbf9t
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:47 +0000 UTC - event for external-local-pods-tbf9t: {default-scheduler } Scheduled: Successfully assigned esipp-9254/external-local-pods-tbf9t to bootstrap-e2e-minion-group-2982
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:48 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:48 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} Created: Created container netexec
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:48 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} Started: Started container netexec
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:49 +0000 UTC - event for pause-pod-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set pause-pod-deployment-6864d4f788 to 1
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:49 +0000 UTC - event for pause-pod-deployment-6864d4f788: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-deployment-6864d4f788-wk5sg
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:50 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {default-scheduler } Scheduled: Successfully assigned esipp-9254/pause-pod-deployment-6864d4f788-wk5sg to bootstrap-e2e-minion-group-2982
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:50 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} Created: Created container agnhost-pause
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:50 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:22:51 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} Started: Started container agnhost-pause
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:23:52 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} Killing: Stopping container netexec
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:23:53 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:23:54 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} Unhealthy: Readiness probe failed: Get "http://10.64.1.61:80/hostName": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:23:58 +0000 UTC - event for external-local-pods-tbf9t: {kubelet bootstrap-e2e-minion-group-2982} BackOff: Back-off restarting failed container netexec in pod external-local-pods-tbf9t_esipp-9254(6e90f519-b725-4bd4-9ab8-ab2278c53bd3)
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:24:08 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:24:09 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:26:54 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} Killing: Stopping container agnhost-pause
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:26:55 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 01:33:05.681: INFO: At 2022-11-26 01:27:15 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 01:33:05.681: INFO: At 2022-11-26 01:28:03 +0000 UTC - event for pause-pod-deployment-6864d4f788-wk5sg: {kubelet bootstrap-e2e-minion-group-2982} BackOff: Back-off restarting failed container agnhost-pause in pod pause-pod-deployment-6864d4f788-wk5sg_esipp-9254(ff14f605-b793-4727-b22d-943a4460592d) Nov 26 01:33:05.681: INFO: At 2022-11-26 01:31:57 +0000 UTC - event for external-local-pods: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 01:33:05.681: INFO: At 2022-11-26 01:31:59 +0000 UTC - event for external-local-pods: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 01:33:05.681: INFO: At 2022-11-26 01:32:54 +0000 UTC - event for external-local-pods: {service-controller } DeletingLoadBalancer: Deleting load balancer Nov 26 01:33:05.681: INFO: At 2022-11-26 01:32:54 +0000 UTC - event for external-local-pods: {service-controller } Type: LoadBalancer -> ClusterIP Nov 26 01:33:05.725: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 01:33:05.725: INFO: external-local-pods-tbf9t bootstrap-e2e-minion-group-2982 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:32:12 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:32:12 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:22:47 +0000 UTC }] Nov 26 01:33:05.725: INFO: Nov 26 01:33:05.842: INFO: Logging node info for node bootstrap-e2e-master Nov 26 01:33:05.894: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f052a6f7-0c51-4660-967d-6ec4c5208a42 15838 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 01:32:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:32:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:32:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:32:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:32:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.44.214,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:df6bcb3c-a5ed-497f-83f2-74f13e952c28,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:33:05.894: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 01:33:05.990: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 01:33:06.071: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container kube-scheduler ready: true, restart count 6 Nov 26 01:33:06.071: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container etcd-container ready: true, restart count 7 Nov 26 01:33:06.071: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container etcd-container ready: true, restart count 4 Nov 26 01:33:06.071: INFO: 
konnectivity-server-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container konnectivity-server-container ready: true, restart count 2 Nov 26 01:33:06.071: INFO: metadata-proxy-v0.1-8h6mf started at 2022-11-26 00:56:42 +0000 UTC (0+2 container statuses recorded) Nov 26 01:33:06.071: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:33:06.071: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:33:06.071: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container kube-apiserver ready: true, restart count 4 Nov 26 01:33:06.071: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 00:55:56 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container kube-controller-manager ready: true, restart count 11 Nov 26 01:33:06.071: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 00:56:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container kube-addon-manager ready: true, restart count 3 Nov 26 01:33:06.071: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 00:56:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.071: INFO: Container l7-lb-controller ready: true, restart count 10 Nov 26 01:33:06.287: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 01:33:06.287: INFO: Logging node info for node bootstrap-e2e-minion-group-0hjv Nov 26 01:33:06.340: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0hjv aba0e90f-9c40-4934-aeed-e719199f0cec 15582 0 2022-11-26 00:56:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0hjv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-0hjv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-5652":"bootstrap-e2e-minion-group-0hjv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:16:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 01:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 01:31:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-0hjv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:36 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:36 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:36 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:31:36 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.247.74.12,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0hjv.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f702fe377ef6bb569afbb12e0158ab5,SystemUUID:7f702fe3-77ef-6bb5-69af-bb12e0158ab5,BootID:7bec61c0-e888-4acc-a61d-e6fb73a87068,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1487^b0f6a40b-6d25-11ed-9ba3-ceb25206bbbd,DevicePath:,},},Config:nil,},} Nov 26 01:33:06.341: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0hjv Nov 26 01:33:06.422: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0hjv Nov 26 01:33:06.608: INFO: metadata-proxy-v0.1-8d7ds started at 2022-11-26 00:56:40 +0000 UTC (0+2 container statuses recorded) Nov 26 01:33:06.608: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:33:06.608: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:33:06.608: INFO: volume-snapshot-controller-0 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container volume-snapshot-controller ready: true, restart count 9 Nov 26 01:33:06.608: INFO: 
pod-subpath-test-dynamicpv-2vf4 started at 2022-11-26 01:00:19 +0000 UTC (1+2 container statuses recorded) Nov 26 01:33:06.608: INFO: Init container init-volume-dynamicpv-2vf4 ready: true, restart count 1 Nov 26 01:33:06.608: INFO: Container test-container-subpath-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:33:06.608: INFO: Container test-container-volume-dynamicpv-2vf4 ready: false, restart count 3 Nov 26 01:33:06.608: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-kpcm8 started at 2022-11-26 00:59:55 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: true, restart count 7 Nov 26 01:33:06.608: INFO: netserver-0 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container webserver ready: false, restart count 9 Nov 26 01:33:06.608: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-ct8rx started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: true, restart count 7 Nov 26 01:33:06.608: INFO: pod-configmaps-cc7f33ac-2f26-44c6-ad1b-c8b91ecdfde7 started at 2022-11-26 01:02:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:33:06.608: INFO: l7-default-backend-8549d69d99-x8spc started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 01:33:06.608: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:15:34 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:06.608: INFO: Container csi-attacher ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-resizer ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container hostpath ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container liveness-probe ready: false, restart count 5 Nov 26 01:33:06.608: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 01:33:06.608: INFO: pod-subpath-test-inlinevolume-v5md started at 2022-11-26 01:00:23 +0000 UTC (1+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Init container init-volume-inlinevolume-v5md ready: true, restart count 0 Nov 26 01:33:06.608: INFO: Container test-container-subpath-inlinevolume-v5md ready: false, restart count 0 Nov 26 01:33:06.608: INFO: coredns-6d97d5ddb-ghpwb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container coredns ready: false, restart count 10 Nov 26 01:33:06.608: INFO: konnectivity-agent-4brl9 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container konnectivity-agent ready: false, restart count 9 Nov 26 01:33:06.608: INFO: netserver-0 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container webserver ready: false, restart count 6 Nov 26 01:33:06.608: INFO: httpd started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container httpd ready: false, restart count 11 Nov 26 01:33:06.608: INFO: netserver-0 started at 2022-11-26 01:06:00 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container webserver ready: false, restart count 8 
Nov 26 01:33:06.608: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-5md2t started at 2022-11-26 01:03:01 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: true, restart count 9 Nov 26 01:33:06.608: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:48 +0000 UTC (0+4 container statuses recorded) Nov 26 01:33:06.608: INFO: Container busybox ready: false, restart count 8 Nov 26 01:33:06.608: INFO: Container csi-provisioner ready: true, restart count 9 Nov 26 01:33:06.608: INFO: Container driver-registrar ready: false, restart count 9 Nov 26 01:33:06.608: INFO: Container mock ready: false, restart count 9 Nov 26 01:33:06.608: INFO: ss-0 started at 2022-11-26 01:00:02 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container webserver ready: true, restart count 13 Nov 26 01:33:06.608: INFO: kube-proxy-bootstrap-e2e-minion-group-0hjv started at 2022-11-26 00:56:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container kube-proxy ready: false, restart count 10 Nov 26 01:33:06.608: INFO: kube-dns-autoscaler-5f6455f985-2brqn started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container autoscaler ready: true, restart count 10 Nov 26 01:33:06.608: INFO: pod-configmaps-a8d056c0-ff53-45cb-8c13-ec73b1032b04 started at 2022-11-26 01:00:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:33:06.608: INFO: pod-d647abcb-295b-4ba3-bb3b-72f4c6f3de02 started at 2022-11-26 00:59:12 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:06.608: INFO: hostexec-bootstrap-e2e-minion-group-0hjv-bkkbv started at 2022-11-26 01:03:25 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:06.608: INFO: Container agnhost-container ready: false, restart count 7 Nov 26 01:33:06.608: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:12:52 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:06.608: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container hostpath ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 01:33:06.608: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:33:06.915: INFO: Latency metrics for node bootstrap-e2e-minion-group-0hjv Nov 26 01:33:06.915: INFO: Logging node info for node bootstrap-e2e-minion-group-2982 Nov 26 01:33:06.963: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2982 23ac061c-c1e5-4314-9c38-31fd0e0866cb 15877 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2982 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-2982 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2174":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-2301":"bootstrap-e2e-minion-group-2982","csi-hostpath-provisioning-8735":"bootstrap-e2e-minion-group-2982","csi-hostpath-volumemode-9250":"bootstrap-e2e-minion-group-2982","csi-mock-csi-mock-volumes-9268":"bootstrap-e2e-minion-group-2982"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 01:27:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 01:31:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:32:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-2982,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:31:46 +0000 UTC,LastTransitionTime:2022-11-26 00:56:42 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:29:58 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:29:58 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:29:58 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:29:58 +0000 UTC,LastTransitionTime:2022-11-26 00:56:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.83.251.2,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2982.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2696a1914e0c43baf9af45da97c22a96,SystemUUID:2696a191-4e0c-43ba-f9af-45da97c22a96,BootID:100bea17-3104-47ce-b900-733cee1dfe77,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19dac6af-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d993ab-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7474^8eb7794d-6d25-11ed-9bf8-7ec81e6e10fe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19e2bbbd-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9114^19d7a84a-6d26-11ed-82a4-7efb5be84aec,DevicePath:,},},Config:nil,},} Nov 26 01:33:06.963: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2982 Nov 26 01:33:07.022: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2982 Nov 26 01:33:07.128: INFO: konnectivity-agent-kbwq2 started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container konnectivity-agent ready: false, restart count 9 Nov 26 01:33:07.128: INFO: netserver-1 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container webserver ready: true, restart count 11 Nov 26 01:33:07.128: INFO: pod-subpath-test-preprovisionedpv-xdzr started at 2022-11-26 01:02:38 +0000 UTC (1+2 container statuses recorded) Nov 26 01:33:07.128: INFO: Init container init-volume-preprovisionedpv-xdzr ready: 
true, restart count 0 Nov 26 01:33:07.128: INFO: Container test-container-subpath-preprovisionedpv-xdzr ready: false, restart count 8 Nov 26 01:33:07.128: INFO: Container test-container-volume-preprovisionedpv-xdzr ready: true, restart count 8 Nov 26 01:33:07.128: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:30 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-attacher ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container csi-resizer ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container csi-snapshotter ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container hostpath ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container liveness-probe ready: true, restart count 9 Nov 26 01:33:07.128: INFO: Container node-driver-registrar ready: true, restart count 9 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-p2ns7 started at 2022-11-26 00:59:16 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 7 Nov 26 01:33:07.128: INFO: csi-mockplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+3 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 01:33:07.128: INFO: Container driver-registrar ready: true, restart count 4 Nov 26 01:33:07.128: INFO: Container mock ready: true, restart count 4 Nov 26 01:33:07.128: INFO: ilb-host-exec started at 2022-11-26 01:12:53 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: false, restart count 4 Nov 26 01:33:07.128: INFO: metrics-server-v0.5.2-867b8754b9-w4frb started at 2022-11-26 00:57:14 +0000 UTC (0+2 container statuses recorded) Nov 26 01:33:07.128: INFO: Container metrics-server ready: false, restart count 9 Nov 26 01:33:07.128: INFO: Container metrics-server-nanny ready: false, restart count 11 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-kxg4f started at 2022-11-26 01:00:17 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-hqtxc started at 2022-11-26 01:02:39 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 9 Nov 26 01:33:07.128: INFO: pod-4db8d57c-3453-4b56-99f5-8158379eb684 started at 2022-11-26 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:07.128: INFO: ss-1 started at 2022-11-26 01:02:07 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container webserver ready: true, restart count 7 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-xmc6r started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:33:07.128: INFO: pod-a9bf9170-0527-4b88-ab1c-09ab6058409d started at 2022-11-26 01:03:43 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:07.128: INFO: pod-subpath-test-inlinevolume-wppj started at 2022-11-26 00:59:05 +0000 UTC (1+2 container statuses recorded) Nov 26 01:33:07.128: INFO: Init container 
init-volume-inlinevolume-wppj ready: true, restart count 2 Nov 26 01:33:07.128: INFO: Container test-container-subpath-inlinevolume-wppj ready: true, restart count 10 Nov 26 01:33:07.128: INFO: Container test-container-volume-inlinevolume-wppj ready: true, restart count 8 Nov 26 01:33:07.128: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:33:07.128: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:33:07.128: INFO: external-local-nodeport-hpnxr started at 2022-11-26 01:00:15 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container netexec ready: true, restart count 5 Nov 26 01:33:07.128: INFO: hostpath-3-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container hostpath-3-client ready: false, restart count 4 Nov 26 01:33:07.128: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-attacher ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container csi-resizer ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container hostpath ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container liveness-probe ready: false, restart count 5 Nov 26 01:33:07.128: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 01:33:07.128: INFO: back-off-cap started at 2022-11-26 01:08:51 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container back-off-cap ready: false, restart count 9 Nov 26 01:33:07.128: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:08:07 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container hostpath ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 01:33:07.128: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 01:33:07.128: INFO: kube-proxy-bootstrap-e2e-minion-group-2982 started at 2022-11-26 00:56:38 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container kube-proxy ready: false, restart count 9 Nov 26 01:33:07.128: INFO: pod-configmaps-0039d476-e3ec-4d1f-95a0-589475853cfc started at 2022-11-26 01:02:20 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-262gq started at 2022-11-26 00:59:26 +0000 UTC (0+1 container 
statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: false, restart count 9 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-xrccm started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: false, restart count 6 Nov 26 01:33:07.128: INFO: pod-bed0f594-e6f2-4d1d-b243-e6b3a7adfbf2 started at 2022-11-26 01:03:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:07.128: INFO: var-expansion-8d1d368e-67cd-4a67-b256-8d870f10a0e2 started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container dapi-container ready: false, restart count 0 Nov 26 01:33:07.128: INFO: external-local-pods-tbf9t started at 2022-11-26 01:22:47 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container netexec ready: false, restart count 6 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-fm6cq started at 2022-11-26 01:03:21 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: false, restart count 7 Nov 26 01:33:07.128: INFO: metadata-proxy-v0.1-2rxjj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:33:07.128: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:33:07.128: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:33:07.128: INFO: external-local-update-rfn9p started at 2022-11-26 01:03:24 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container netexec ready: false, restart count 5 Nov 26 01:33:07.128: INFO: pod-subpath-test-preprovisionedpv-mkpm started at 2022-11-26 01:02:54 +0000 UTC (1+2 container statuses recorded) Nov 26 01:33:07.128: INFO: Init container init-volume-preprovisionedpv-mkpm ready: true, restart count 2 Nov 26 01:33:07.128: INFO: Container test-container-subpath-preprovisionedpv-mkpm ready: false, restart count 8 Nov 26 01:33:07.128: INFO: Container test-container-volume-preprovisionedpv-mkpm ready: false, restart count 7 Nov 26 01:33:07.128: INFO: hostpath-1-client started at 2022-11-26 01:03:13 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container hostpath-1-client ready: true, restart count 5 Nov 26 01:33:07.128: INFO: pod-5be3eec2-e823-4f42-901c-fd502ef8f0d6 started at 2022-11-26 00:59:19 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:07.128: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:13:00 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.128: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container hostpath ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 01:33:07.128: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 01:33:07.128: INFO: hostpath-2-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container hostpath-2-client ready: true, restart count 5 Nov 26 01:33:07.128: INFO: 
netserver-1 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container webserver ready: true, restart count 9 Nov 26 01:33:07.128: INFO: hostpath-0-client started at 2022-11-26 01:03:14 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container hostpath-0-client ready: true, restart count 5 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-n9wzs started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 01:33:07.128: INFO: pod-subpath-test-inlinevolume-7tmj started at 2022-11-26 01:03:45 +0000 UTC (1+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Init container init-volume-inlinevolume-7tmj ready: true, restart count 0 Nov 26 01:33:07.128: INFO: Container test-container-subpath-inlinevolume-7tmj ready: false, restart count 0 Nov 26 01:33:07.128: INFO: lb-internal-8mn52 started at 2022-11-26 01:06:44 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container netexec ready: false, restart count 8 Nov 26 01:33:07.128: INFO: hostexec-bootstrap-e2e-minion-group-2982-x689s started at 2022-11-26 01:13:50 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.128: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 01:33:07.490: INFO: Latency metrics for node bootstrap-e2e-minion-group-2982 Nov 26 01:33:07.490: INFO: Logging node info for node bootstrap-e2e-minion-group-krkd Nov 26 01:33:07.532: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-krkd 793d73ff-a93b-4c26-a03e-336167d8e481 15677 0 2022-11-26 00:56:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-krkd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-krkd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2415":"bootstrap-e2e-minion-group-krkd","csi-hostpath-multivolume-6742":"bootstrap-e2e-minion-group-krkd","csi-hostpath-volumemode-9128":"bootstrap-e2e-minion-group-krkd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 00:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 01:15:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 01:31:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 01:32:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-krkd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 01:31:48 +0000 UTC,LastTransitionTime:2022-11-26 00:56:41 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 00:56:54 +0000 UTC,LastTransitionTime:2022-11-26 00:56:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:09 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:09 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 01:31:09 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 01:31:09 +0000 UTC,LastTransitionTime:2022-11-26 00:56:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-krkd.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fdc8d24e89d871cca13350a32de1b46c,SystemUUID:fdc8d24e-89d8-71cc-a133-50a32de1b46c,BootID:14d1719a-3357-4298-85f2-160baff11885,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1813^91a0fc90-6d25-11ed-88b9-c28a1eb064ec],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 01:33:07.533: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-krkd Nov 26 01:33:07.596: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-krkd Nov 26 01:33:07.703: INFO: hostexec-bootstrap-e2e-minion-group-krkd-2wbgn started at 2022-11-26 01:01:34 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 01:33:07.703: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:02:10 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.703: INFO: Container csi-attacher ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container csi-resizer ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container csi-snapshotter ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container hostpath ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container liveness-probe ready: true, restart count 9 Nov 26 01:33:07.703: INFO: Container node-driver-registrar ready: true, restart count 9 Nov 26 01:33:07.703: INFO: hostexec-bootstrap-e2e-minion-group-krkd-4bh2r started at 2022-11-26 00:59:05 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 01:33:07.703: INFO: pod-subpath-test-preprovisionedpv-snr7 started at 2022-11-26 00:59:30 +0000 UTC (1+2 container statuses recorded) Nov 26 01:33:07.703: INFO: Init container init-volume-preprovisionedpv-snr7 ready: true, restart count 8 Nov 26 01:33:07.703: INFO: Container test-container-subpath-preprovisionedpv-snr7 ready: true, restart count 10 Nov 26 01:33:07.703: INFO: Container test-container-volume-preprovisionedpv-snr7 ready: true, restart count 10 Nov 26 01:33:07.703: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container driver-registrar ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container mock ready: false, restart count 6 Nov 26 01:33:07.703: INFO: 
pod-back-off-image started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container back-off ready: false, restart count 11 Nov 26 01:33:07.703: INFO: kube-proxy-bootstrap-e2e-minion-group-krkd started at 2022-11-26 00:56:37 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container kube-proxy ready: true, restart count 10 Nov 26 01:33:07.703: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+4 container statuses recorded) Nov 26 01:33:07.703: INFO: Container busybox ready: false, restart count 8 Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: false, restart count 8 Nov 26 01:33:07.703: INFO: Container driver-registrar ready: false, restart count 10 Nov 26 01:33:07.703: INFO: Container mock ready: false, restart count 10 Nov 26 01:33:07.703: INFO: coredns-6d97d5ddb-bw2sm started at 2022-11-26 00:57:04 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container coredns ready: false, restart count 11 Nov 26 01:33:07.703: INFO: csi-hostpathplugin-0 started at 2022-11-26 00:59:51 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.703: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container hostpath ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 01:33:07.703: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 01:33:07.703: INFO: csi-hostpathplugin-0 started at 2022-11-26 01:14:48 +0000 UTC (0+7 container statuses recorded) Nov 26 01:33:07.703: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container hostpath ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 01:33:07.703: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 01:33:07.703: INFO: konnectivity-agent-qtkxb started at 2022-11-26 00:56:54 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container konnectivity-agent ready: true, restart count 9 Nov 26 01:33:07.703: INFO: ss-2 started at 2022-11-26 01:03:10 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container webserver ready: true, restart count 9 Nov 26 01:33:07.703: INFO: csi-mockplugin-0 started at 2022-11-26 00:59:07 +0000 UTC (0+3 container statuses recorded) Nov 26 01:33:07.703: INFO: Container csi-provisioner ready: false, restart count 7 Nov 26 01:33:07.703: INFO: Container driver-registrar ready: false, restart count 7 Nov 26 01:33:07.703: INFO: Container mock ready: false, restart count 7 Nov 26 01:33:07.703: INFO: pvc-volume-tester-5lrn7 started at 2022-11-26 00:59:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container volume-tester ready: false, restart count 0 Nov 26 01:33:07.703: INFO: netserver-2 started at 2022-11-26 01:02:08 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container webserver ready: false, 
restart count 9 Nov 26 01:33:07.703: INFO: pod-a74d383c-dbf7-4f6c-8968-69afbaf5f366 started at 2022-11-26 01:22:26 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container write-pod ready: false, restart count 0 Nov 26 01:33:07.703: INFO: metadata-proxy-v0.1-qzrwj started at 2022-11-26 00:56:38 +0000 UTC (0+2 container statuses recorded) Nov 26 01:33:07.703: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 01:33:07.703: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 01:33:07.703: INFO: netserver-2 started at 2022-11-26 01:00:22 +0000 UTC (0+1 container statuses recorded) Nov 26 01:33:07.703: INFO: Container webserver ready: false, restart count 10 Nov 26 01:33:07.957: INFO: Latency metrics for node bootstrap-e2e-minion-group-krkd [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-9254" for this suite. 11/26/22 01:33:07.957
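The per-node dump above (node status followed by "pods the kubelet thinks is on node") is produced by the framework's debug helpers when a test fails. Below is a minimal client-go sketch of the pod half of that listing, assuming a kubeconfig at the path logged above; it is illustrative only and not the framework's actual dump code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name are taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node := "bootstrap-e2e-minion-group-krkd"
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node, // every namespace, this node only
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s started at %v (%d container statuses recorded)\n",
			p.Name, p.Status.StartTime, len(p.Status.ContainerStatuses))
		for _, c := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", c.Name, c.Ready, c.RestartCount)
		}
	}
}

The field selector pushes the per-node filtering to the API server, which is why the listing covers pods from every namespace scheduled onto that node.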
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
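The --ginkgo.focus value in these reproduce commands is an anchored regular expression over the full Ginkgo test name, with spaces written as \s and brackets escaped. A standalone check of that relationship (hypothetical program, not part of the suite) for the command above:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the command above; it must match the full test name.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$`)
	name := "Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]"
	fmt.Println(focus.MatchString(name)) // prints: true
}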
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e564b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:03:58.391 Nov 26 01:03:58.391: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 01:03:58.392 Nov 26 01:03:58.432: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:00.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:02.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:04.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:06.471: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:08.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:10.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:12.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:14.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:16.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:18.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:20.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:22.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:24.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:26.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:28.472: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:28.511: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:04:28.511: INFO: Unexpected error: <*errors.errorString | 0xc000215d80>: { s: "timed out waiting for the condition", } Nov 26 01:04:28.511: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e564b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 01:04:28.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:04:28.551 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
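The [PANICKED] entry above is a follow-on failure: the framework's BeforeEach timed out before setup finished, so whatever the LoadBalancers AfterEach dereferences during cleanup is still nil. A minimal Ginkgo-style sketch of that pattern, and a guard against it, using hypothetical names rather than the actual loadbalancer.go code:

package example

import (
	"github.com/onsi/ginkgo/v2"
	clientset "k8s.io/client-go/kubernetes"
)

var _ = ginkgo.Describe("[sig-network] LoadBalancers (sketch)", func() {
	var cs clientset.Interface // normally set during per-test setup

	ginkgo.BeforeEach(func() {
		// cs = f.ClientSet // normal setup; never reached in the failing run
		// because the framework's own BeforeEach timed out creating the
		// namespace, so Ginkgo skipped straight to AfterEach.
	})

	ginkgo.AfterEach(func() {
		// Calling a method through a nil interface panics with
		// "invalid memory address or nil pointer dereference", which is what
		// turns one setup failure into an extra PANICKED result. Guard it:
		if cs == nil {
			return
		}
		_ = cs.CoreV1() // cleanup that needs the clientset
	})
})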
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\sLoadBalancer\sService\swithout\sNodePort\sand\schange\sit\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011504b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:00:57.119 Nov 26 01:00:57.119: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 01:00:57.12 Nov 26 01:00:57.160: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:00:59.200: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:01.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:03.200: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:05.200: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:07.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:09.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:11.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:13.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:15.200: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:17.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:19.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:21.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:23.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:25.199: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:27.200: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:27.239: INFO: Unexpected error while creating namespace: Post "https://34.168.44.214/api/v1/namespaces": dial tcp 34.168.44.214:443: connect: connection refused Nov 26 01:01:27.239: INFO: Unexpected error: <*errors.errorString | 0xc00017da50>: { s: "timed out waiting for the condition", } Nov 26 01:01:27.239: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011504b0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 01:01:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 01:01:27.279 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
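Both LoadBalancers setups above fail the same way: the namespace POST is retried roughly every two seconds for about 30 seconds and then gives up with the generic wait-package error "timed out waiting for the condition". A sketch of that retry shape, assuming client-go and apimachinery; the names are illustrative and this is not the framework's own helper:

package example

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// createTestNamespace keeps retrying the POST, treating transient errors such
// as "connect: connection refused" as retryable; if the API server never comes
// back, wait returns "timed out waiting for the condition", as seen above.
func createTestNamespace(cs clientset.Interface, baseName string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // retry
		}
		created = ns
		return true, nil
	})
	return created, err
}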
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/network/loadbalancer.go:638
k8s.io/kubernetes/test/e2e/network.glob..func19.6()
	test/e2e/network/loadbalancer.go:638 +0x634
There were additional failures detected after the initial failure:
[FAILED] Nov 26 01:29:09.377: Couldn't delete ns: "loadbalancers-5274": Delete "https://34.168.44.214/api/v1/namespaces/loadbalancers-5274": dial tcp 34.168.44.214:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.168.44.214/api/v1/namespaces/loadbalancers-5274", Err:(*net.OpError)(0xc0032dd8b0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
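The log below polls pod lb-internal-8mn52 about every two seconds, first while it is Pending and then while it is Running but not Ready (ContainersNotReady for the netexec container). A minimal sketch of that kind of running-and-ready check, assuming client-go; it is illustrative and not the framework's actual WaitForPodCondition helper:

package example

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls the pod every 2s until it is both Running and
// has its Ready condition set to True, or until the timeout expires.
func waitForPodRunningAndReady(cs clientset.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil // still Pending, as in the first part of the log below
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady {
				// Running but ContainersNotReady keeps this False, as logged below.
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}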
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 01:06:41.6 Nov 26 01:06:41.600: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 01:06:41.603 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 01:06:41.998 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 01:06:42.087 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/26/22 01:06:44.472 Nov 26 01:06:44.524: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 01:06:44.586: INFO: Found all 1 pods Nov 26 01:06:44.586: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-8mn52] Nov 26 01:06:44.586: INFO: Waiting up to 2m0s for pod "lb-internal-8mn52" in namespace "loadbalancers-5274" to be "running and ready" Nov 26 01:06:44.638: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 51.737369ms Nov 26 01:06:44.638: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:46.700: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114088854s Nov 26 01:06:46.700: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:48.716: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129518717s Nov 26 01:06:48.716: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:50.722: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136027084s Nov 26 01:06:50.722: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:52.828: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241350635s Nov 26 01:06:52.828: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:54.708: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 10.122222723s Nov 26 01:06:54.708: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:56.772: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 12.185972757s Nov 26 01:06:56.772: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:06:58.697: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.110858548s Nov 26 01:06:58.697: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:00.733: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 16.14668903s Nov 26 01:07:00.733: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:02.728: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 18.141319943s Nov 26 01:07:02.728: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:04.745: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 20.158758311s Nov 26 01:07:04.745: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:06.694: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 22.107410864s Nov 26 01:07:06.694: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:08.798: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 24.211996619s Nov 26 01:07:08.798: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:10.738: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 26.151486738s Nov 26 01:07:10.738: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:12.746: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 28.159778936s Nov 26 01:07:12.746: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:14.708: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 30.121395459s Nov 26 01:07:14.708: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:16.793: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 32.206965733s Nov 26 01:07:16.793: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:18.706: INFO: Pod "lb-internal-8mn52": Phase="Pending", Reason="", readiness=false. Elapsed: 34.119468144s Nov 26 01:07:18.706: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' to be 'Running' but was 'Pending' Nov 26 01:07:20.819: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.232581879s Nov 26 01:07:20.819: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:22.695: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 38.108830826s Nov 26 01:07:22.695: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:24.787: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 40.200639519s Nov 26 01:07:24.787: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:26.691: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 42.105014303s Nov 26 01:07:26.691: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:28.688: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.101802927s Nov 26 01:07:28.688: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:30.705: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 46.118779921s Nov 26 01:07:30.705: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:32.694: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 48.107742502s Nov 26 01:07:32.694: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:34.710: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 50.123666711s Nov 26 01:07:34.710: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:36.705: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.119084131s Nov 26 01:07:36.705: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:38.700: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 54.113981523s Nov 26 01:07:38.700: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:40.690: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 56.103771764s Nov 26 01:07:40.690: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:42.714: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 58.127625951s Nov 26 01:07:42.714: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:44.703: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.116826018s Nov 26 01:07:44.703: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:46.715: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.128399769s Nov 26 01:07:46.715: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:48.725: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.138347211s Nov 26 01:07:48.725: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:50.693: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.107180456s Nov 26 01:07:50.693: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:52.778: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.192101695s Nov 26 01:07:52.778: INFO: Error evaluating pod condition running and ready: pod 'lb-internal-8mn52' on 'bootstrap-e2e-minion-group-2982' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:07:00 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 01:06:44 +0000 UTC }] Nov 26 01:07:54.772: INFO: Pod "lb-internal-8mn52": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.1